This is a downstream fork of skypilot-org/skypilot maintained by EmbeddedLLM. It tracks upstream releases and stacks a small set of custom patches on top.
| Item | Value |
|---|---|
| Upstream version | 0.12.0 |
| Branch | `ellm-0.12.0` |
| Image | `ghcr.io/embeddedllm/skypilot:v0.12.0` |
| Upstream repo | skypilot-org/skypilot |
| Tag | Meaning |
|---|---|
| `v0.12.0` | Stable, production-ready build based on upstream v0.12.0 |
| `v0.12.0-dev` | Development build off the `ellm-0.12.0` branch; not yet stable |
| Patch | Commit | Files Changed | Description |
|---|---|---|---|
| Enable dual GPU in a single API server | `493fb1f` | `sky/clouds/kubernetes.py`<br>`sky/provision/kubernetes/utils.py` | Makes `get_node_accelerator_count` check both the `nvidia.com/gpu` and `amd.com/gpu` resource keys, so nodes with either vendor's GPU report a non-zero accelerator count. Note: the `kubernetes.py` resource-key selection from this patch was superseded by `9cd2668`, which derives the resource key from the detected label formatter instead of `skypilot.co/gpu` node labels. |
| Automatic AMD GPU detection via device plugin labels | `9cd2668` | `sky/provision/kubernetes/utils.py`<br>`sky/catalog/kubernetes_catalog.py`<br>`sky/clouds/kubernetes.py`<br>`sky/utils/gpu_names.py`<br>`sky/client/cli/command.py` | AMD GPU nodes are detected automatically when the AMD device plugin is installed; no `sky gpus label` step is required. Adds `AMDGPULabelFormatter`, which reads `amd.com/gpu.product-name[.<GPU>]` labels in both the single-GPU (name in the value) and multi-GPU (name in the key suffix) formats. iGPUs/APUs (generic Radeon Graphics, Vega ≤16 CU, mobile `NNNm`) are filtered out at label-match time and never exposed as schedulable resources. All GPU detection code paths (`sky gpus list`, per-node status, pod scheduling, CPU-only node selection) now iterate over all formatters per node, enabling mixed NVIDIA + AMD clusters with no extra configuration. Adds 33 canonical AMD GPU names (Instinct MI CDNA1–4, Radeon Pro W-series, Radeon RX RDNA2/3) to the shared GPU name registry. |
| Fix pod scheduling for mixed NVIDIA + AMD clusters | `e9581a9` | `sky/provision/kubernetes/instance.py` | Fixes pod scheduling and SkyServe replica placement on AMD GPU nodes in a mixed cluster. `instance.py` previously called `get_gpu_resource_key(context)` (the cluster-wide default) in three places, which returned the wrong vendor key for the non-default GPU type. Those call sites now consider the `SUPPORTED_GPU_RESOURCE_KEYS` values instead of only the cluster default. |
| [Kubernetes] Fix podip endpoint in HA mode | `f3b4561` | `sky/provision/kubernetes/network.py` | Fixes the `http://None` endpoint when using `high_availability` with `podip` port mode for `sky-serve-controller`. In HA mode the controller runs as a Deployment, so Kubernetes assigns random pod-name suffixes and the expected `{cluster_name}-head` pod never exists. The fix uses label selectors instead of a pod-name lookup, working correctly in both HA and non-HA modes. |
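The two label formats handled by `AMDGPULabelFormatter` can be illustrated with a small standalone sketch. This is hypothetical code with simplified semantics, not the fork's actual implementation (which lives in `sky/provision/kubernetes/utils.py`); the sample label keys and values are made up:

```python
# Hypothetical sketch of parsing AMD device-plugin product-name labels.
# Not the fork's actual code; label semantics are simplified for illustration.
PREFIX = 'amd.com/gpu.product-name'

def gpu_name_from_label(key: str, value: str):
    """Extract a GPU name from a node label, or return None."""
    if key == PREFIX:
        # Single-GPU format: the product name is the label *value*.
        return value or None
    if key.startswith(PREFIX + '.'):
        # Multi-GPU format: the product name is the *key suffix*.
        return key[len(PREFIX) + 1:] or None
    return None

labels = {
    'amd.com/gpu.product-name': 'Instinct_MI300X',      # single-GPU node
    'amd.com/gpu.product-name.Instinct_MI210': 'true',  # multi-GPU node
    'kubernetes.io/hostname': 'node-1',                 # unrelated label
}
names = [n for k, v in labels.items()
         if (n := gpu_name_from_label(k, v)) is not None]
print(names)  # -> ['Instinct_MI300X', 'Instinct_MI210']
```

Scanning every label against every formatter this way is what lets a single node list pick up NVIDIA and AMD nodes in the same pass.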
Never commit directly to `ellm-{version}`. Create a feature branch off it, then open a PR back into it.
```bash
# 1. Branch off the active version branch
git checkout ellm-0.12.0
git checkout -b feat/my-feature

# 2. Make changes, commit
git add <files>
git commit -m "[Area] Description"
git push origin feat/my-feature

# 3. Open a PR → target ellm-0.12.0 (not master)
gh pr create --base ellm-0.12.0 --title "..." --body "..."
```

Build and push a dev image to test before merging:

```bash
docker buildx build --push --platform linux/amd64 \
  -t ghcr.io/embeddedllm/skypilot:v0.12.0-dev \
  -f Dockerfile .
```

Once the PR is merged and validated, promote to stable:

```bash
docker tag ghcr.io/embeddedllm/skypilot:v0.12.0-dev ghcr.io/embeddedllm/skypilot:v0.12.0
docker push ghcr.io/embeddedllm/skypilot:v0.12.0
```

Deploying with Helm
The Helm chart is pinned to the same upstream version. Always specify `--version` explicitly:
```bash
helm upgrade --install $RELEASE_NAME skypilot/skypilot \
  --version 0.12.0 \
  --namespace $NAMESPACE \
  --create-namespace \
  --set apiService.image=ghcr.io/embeddedllm/skypilot:v0.12.0 \
  --set ingress.authCredentials=$AUTH_STRING
```

When moving to a new upstream version (e.g. v0.13.0), update both `--version` and `--set apiService.image` together. The Helm chart version must always match the image version.
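The version-match rule is easy to check mechanically before deploying. The following is an illustrative snippet (the variable values are placeholders standing in for whatever your deploy script passes to Helm):

```python
# Illustrative pre-deploy sanity check: the Helm chart version must match
# the image tag. Image tags carry a leading 'v'; chart versions do not.
chart_version = '0.12.0'
image = 'ghcr.io/embeddedllm/skypilot:v0.12.0'

tag = image.rsplit(':', 1)[1]
if tag != f'v{chart_version}':
    raise SystemExit(f'Mismatch: chart {chart_version} vs image tag {tag}')
print('chart/image versions match')
```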
Each upstream version gets its own branch (`ellm-0.12.0`, `ellm-0.13.0`, ...). Old branches are kept as rollback points.
```bash
# 1. Sync master with upstream
git checkout master
git fetch upstream
git merge upstream/master
git push origin master

# 2. Create the new branch from updated master
git checkout -b ellm-{new_version}

# 3. Cherry-pick the custom patches (commit hashes from the table above)
git cherry-pick 493fb1f  # Enable dual GPU in a single API server
git cherry-pick f3b4561  # Fix podip endpoint in HA mode
git cherry-pick 9cd2668  # Automatic AMD GPU detection via device plugin labels
git cherry-pick e9581a9  # Fix pod scheduling for mixed NVIDIA + AMD clusters
# Resolve any conflicts if upstream changed the same files

# 4. Push the new branch
git push origin ellm-{new_version}
```

After creating the new branch, update this README: bump the Upstream version, Branch, and Image fields and the commit hashes in the patch table. Build and push the new image as `ghcr.io/embeddedllm/skypilot:v{new_version}`.
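The README bump is mechanical and easy to get wrong by hand. A throwaway helper like the following (hypothetical, not part of the repo) rewrites all three version spellings consistently; the commit hashes in the patch table still need updating by hand after the cherry-picks:

```python
# Hypothetical helper to bump version strings in this README consistently.
# Replacing '0.12.0' also covers 'v0.12.0' and 'ellm-0.12.0'.
# Commit hashes in the patch table must still be updated manually.
def bump_versions(text: str, old: str, new: str) -> str:
    return text.replace(old, new)

sample = ('| Upstream version | 0.12.0 |\n'
          '| Branch | ellm-0.12.0 |\n'
          '| Image | ghcr.io/embeddedllm/skypilot:v0.12.0 |')
print(bump_versions(sample, '0.12.0', '0.13.0'))
```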
SkyPilot is a system to run, manage, and scale AI workloads on any AI infrastructure.
SkyPilot gives AI teams a simple interface to run jobs on any infra. Infra teams get a unified control plane to manage any AI compute — with advanced scheduling, scaling, and orchestration.
🔥 News 🔥
- [Dec 2025] SkyPilot v0.11 released: Multi-Cloud Pools, Fast Managed Jobs, Enterprise-Readiness at Large Scale, Programmability. Release notes
- [Dec 2025] SkyPilot Pools released: Run batch inference and other jobs on a managed pool of warm workers (across clouds or clusters). blog, docs
- [Dec 2025] Train an agent to use Google Search as a tool with RL on your Kubernetes or clouds: blog, example
- [Nov 2025] Serve Kimi K2 Thinking with reasoning capabilities on your Kubernetes or clouds: example
- [Oct 2025] Run RL training for LLMs with SkyRL on your Kubernetes or clouds: example
- [Oct 2025] Train and serve Andrej Karpathy's nanochat - the best ChatGPT that $100 can buy: example
- [Oct 2025] Run large-scale LLM training with TorchTitan on any AI infra: example
- [Sep 2025] Scaling AI infrastructure at Abridge - 10x faster development with SkyPilot: blog
- [Sep 2025] Network and Storage Benchmarks for LLM training on the cloud: blog
- [Aug 2025] Serve and finetune OpenAI GPT-OSS models (gpt-oss-120b, gpt-oss-20b) with one command on any infra: serve + LoRA and full finetuning
- [Jul 2025] Run distributed RL training for LLMs with Verl (PPO, GRPO) on any cloud: example
SkyPilot is easy to use for AI teams:
- Quickly spin up compute on your own infra
- Environment and job as code — simple and portable
- Easy job management: queue, run, and auto-recover many jobs
SkyPilot makes Kubernetes easy for AI & Infra teams:
- Slurm-like ease of use, cloud-native robustness
- Local dev experience on K8s: SSH into pods, sync code, or connect IDE
- Turbocharge your clusters: gang scheduling, multi-cluster, and scaling
SkyPilot unifies multiple clusters, clouds, and hardware:
- One interface to use reserved GPUs, Kubernetes clusters, Slurm clusters, or 20+ clouds
- Flexible provisioning of GPUs, TPUs, CPUs, with auto-retry
- Team deployment and resource sharing
SkyPilot cuts your cloud costs & maximizes GPU availability:
- Autostop: automatic cleanup of idle resources
- Spot instance support: 3-6x cost savings, with preemption auto-recovery
- Intelligent scheduling: automatically run on the cheapest & most available infra
SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.
Install with pip:
```bash
# Choose your clouds:
pip install -U "skypilot[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform,verda]"
```

To get the latest features and fixes, use the nightly build or install from source:

```bash
# Choose your clouds:
pip install "skypilot-nightly[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform,verda]"
```

To use SkyPilot directly with your agent (Claude Code, Codex, etc.), install the SkyPilot Skill. Tell your agent:
```
Fetch and follow https://github.com/skypilot-org/skypilot/blob/HEAD/agent/INSTALL.md to install the skypilot skill
```
Current supported infra: Kubernetes, Slurm, AWS, GCP, Azure, OCI, CoreWeave, Nebius, Lambda Cloud, RunPod, Fluidstack, Cudo, Digital Ocean, Paperspace, Cloudflare, Samsung, IBM, Vast.ai, VMware vSphere, Seeweb, Prime Intellect, Shadeform, Verda Cloud, VastData, Crusoe.
You can find our documentation here.
A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.
Once written in this unified interface (YAML or Python API), the task can be launched on any available infra (Kubernetes, Slurm, cloud, etc.). This avoids vendor lock-in, and allows easily moving jobs to a different provider.
Paste the following into a file `my_task.yaml`:

```yaml
resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPUs

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  cd mnist
  pip install -r requirements.txt

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1
```

Prepare the workdir by cloning:

```bash
git clone https://github.com/pytorch/examples.git ~/torch_examples
```

Launch with `sky launch` (note: access to GPU instances is needed for this example):

```bash
sky launch my_task.yaml
```

SkyPilot then performs the heavy lifting for you, including:
- Find the cheapest & available infra across your clusters or clouds
- Provision the GPUs (pods or VMs), with auto-failover if the infra returns capacity errors
- Sync your local `workdir` to the provisioned cluster
- Auto-install dependencies by running the task's `setup` commands
- Run the task's `run` commands, and stream logs
See Quickstart to get started with SkyPilot.
See SkyPilot examples that cover: development, training, serving, LLM models, AI apps, and common frameworks.
Latest featured examples:
| Task | Examples |
|---|---|
| Training | Verl, Finetune Llama 4, TorchTitan, PyTorch, DeepSpeed, NeMo, Ray, Unsloth, Jax/TPU, OpenRLHF |
| Serving | vLLM, SGLang, Ollama |
| Models | DeepSeek-R1, Llama 4, Llama 3, CodeLlama, Qwen, Kimi-K2, Kimi-K2-Thinking, Mixtral |
| AI apps | RAG, vector databases (ChromaDB, CLIP) |
| Common frameworks | Airflow, Jupyter, marimo |
Source files can be found in llm/ and examples/.
To learn more, see SkyPilot Overview, SkyPilot docs, and SkyPilot blog.
SkyPilot adopters: Testimonials and Case Studies
Partners and integrations: Community Spotlights
Follow updates:
Read the research:
- SkyPilot paper and talk (NSDI 2023)
- Sky Computing whitepaper
- Sky Computing vision paper (HotOS 2021)
- SkyServe: AI serving across regions and clouds (EuroSys 2025)
- Managed jobs spot instance policy (NSDI 2024)
SkyPilot was initially started at the Sky Computing Lab at UC Berkeley and has since gained many industry contributors. To read about the project's origin and vision, see Concept: Sky Computing.
We are excited to hear your feedback:
- For issues and feature requests, please open a GitHub issue.
- For questions, please use GitHub Discussions.
- For general discussions, join us on the SkyPilot Slack.
We welcome all contributions to the project! See CONTRIBUTING for how to get involved.
