Install SGLang#
You can install SGLang using one of the methods below.
This page primarily applies to common NVIDIA GPU platforms. For other or newer platforms, please refer to the dedicated pages for NVIDIA Blackwell GPUs, AMD GPUs, Intel Xeon CPUs, NVIDIA Jetson, and Ascend NPUs.
Method 1: With pip or uv#
It is recommended to use uv for faster installation:
pip install --upgrade pip
pip install uv
uv pip install "sglang[all]>=0.5.0rc0"
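Optionally, you can sanity-check the installation by launching a server and querying it. This is a minimal sketch; the example model and the /health route are assumptions, so adjust them to your setup:
# Launch a server with an example model (downloads from Hugging Face on first run)
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
# In another terminal, check that the server responds (assumes a /health route)
curl http://localhost:30000/health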
Quick fixes to common problems
If you encounter the error
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root
resolve it with either of the following solutions:

Set the CUDA_HOME environment variable, for example:
export CUDA_HOME=/usr/local/cuda-<your-cuda-version>
(a quick way to find the installed version is shown after these solutions).

Alternatively, install FlashInfer first following the FlashInfer installation doc, then install SGLang as described above.
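If you are unsure which CUDA toolkit is installed, a quick check (a sketch assuming the default /usr/local layout):
# List installed CUDA toolkits and confirm the version on your PATH
ls -d /usr/local/cuda*
nvcc --version
# Then point CUDA_HOME at the matching directory, e.g. /usr/local/cuda-12.4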
SGLang currently uses torch 2.8 and flashinfer built for torch 2.8. If you want to install flashinfer separately, please refer to the FlashInfer installation doc. Please note that the FlashInfer PyPI package is called flashinfer-python, not flashinfer.
Method 2: From source#
# Use the last release branch
git clone -b v0.5.0rc0 https://github.com/sgl-project/sglang.git
cd sglang
# Install the python packages
pip install --upgrade pip
pip install -e "python[all]"
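To confirm that the editable install is importable, you can run a quick check (a sketch; it assumes the package exposes __version__):
python -c "import sglang; print(sglang.__version__)"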
Quick fixes to common problems
If you want to develop SGLang, it is recommended to use docker. Please refer to setup docker container. The docker image is lmsysorg/sglang:dev.

SGLang currently uses torch 2.8 and flashinfer built for torch 2.8. If you want to install flashinfer separately, please refer to the FlashInfer installation doc. Please note that the FlashInfer PyPI package is called flashinfer-python, not flashinfer.
Method 3: Using docker#
The docker images are available on Docker Hub at lmsysorg/sglang, built from the Dockerfile in the repository.
Replace <secret> below with your Hugging Face Hub token.
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
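Once the container is running, you can send a test request from the host. This is a sketch that assumes the server's OpenAI-compatible chat completions route:
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct", "messages": [{"role": "user", "content": "Hello!"}]}'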
Method 4: Using Kubernetes#
Please check out OME, a Kubernetes operator for enterprise-grade management and serving of large language models (LLMs).
Option 1: For single-node serving (typically when the model fits into the GPUs on one node), run
kubectl apply -f docker/k8s-sglang-service.yaml
to create the k8s deployment and service, with llama-31-8b as the example model.

Option 2: For multi-node serving (usually when a large model, such as DeepSeek-R1, requires more than one GPU node), modify the LLM model path and arguments as necessary, then run
kubectl apply -f docker/k8s-sglang-distributed-sts.yaml
to create a two-node k8s StatefulSet and serving service.
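For either option, you can check the rollout and reach the service with standard kubectl commands. This is a sketch; the actual resource names come from the YAML manifests, so <sglang-service-name> below is a placeholder:
kubectl get pods        # wait until the SGLang pod(s) are Running
kubectl get svc         # find the service created by the manifest
kubectl port-forward svc/<sglang-service-name> 30000:30000   # then query http://localhost:30000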
Method 5: Using docker compose#
This method is recommended if you plan to serve SGLang as a long-running service; an even better approach is to use the k8s-sglang-service.yaml from Method 4.
Copy the compose.yml to your local machine, then run the following command in your terminal:
docker compose up -d
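For reference, a hypothetical compose.yml sketch that mirrors the docker run flags from Method 3 could look like the following; the compose.yml shipped in the repository is the authoritative version:
# Hypothetical sketch only; replace <secret> with your Hugging Face Hub token
services:
  sglang:
    image: lmsysorg/sglang:latest
    ports:
      - "30000:30000"
    volumes:
      - ${HOME}/.cache/huggingface:/root/.cache/huggingface
    environment:
      HF_TOKEN: <secret>
    ipc: host
    shm_size: 32g
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    command: python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000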
Method 6: Run on Kubernetes or Clouds with SkyPilot#
To deploy on Kubernetes or 12+ clouds, you can use SkyPilot.
Install SkyPilot and set up your Kubernetes cluster or cloud access: see SkyPilot's documentation.
Deploy on your own infra with a single command and get the HTTP API endpoint:
SkyPilot YAML: sglang.yaml
# sglang.yaml
envs:
HF_TOKEN: null
resources:
image_id: docker:lmsysorg/sglang:latest
accelerators: A100
ports: 30000
run: |
conda deactivate
python3 -m sglang.launch_server \
--model-path meta-llama/Llama-3.1-8B-Instruct \
--host 0.0.0.0 \
--port 30000
# Deploy on any cloud or Kubernetes cluster. Use --cloud <cloud> to select a specific cloud provider.
HF_TOKEN=<secret> sky launch -c sglang --env HF_TOKEN sglang.yaml
# Get the HTTP API endpoint
sky status --endpoint 30000 sglang
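After the endpoint is up, you can query it directly. This is a sketch that assumes the server exposes the OpenAI-compatible /v1/models route:
ENDPOINT=$(sky status --endpoint 30000 sglang)
curl http://$ENDPOINT/v1/models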
To further scale up your deployment with autoscaling and failure recovery, check out the SkyServe + SGLang guide.
Common Notes#
FlashInfer is the default attention kernel backend. It only supports sm75 and above. If you encounter any FlashInfer-related issues on sm75+ devices (e.g., T4, A10, A100, L4, L40S, H100), please switch to other kernels by adding
--attention-backend triton --sampling-backend pytorch
and open an issue on GitHub.

To reinstall flashinfer locally, use the following command:
pip3 install --upgrade flashinfer-python --force-reinstall --no-deps
and then delete the cache with
rm -rf ~/.cache/flashinfer

If you only need to use OpenAI API models with the frontend language, you can avoid installing other dependencies by using
pip install "sglang[openai]"

The language frontend operates independently of the backend runtime. You can install the frontend locally without needing a GPU, while the backend can be set up on a GPU-enabled machine. To install the frontend, run
pip install sglang
and for the backend, use
pip install "sglang[srt]"
srt is the abbreviation of SGLang Runtime.
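For example, a minimal frontend program can target a backend server running elsewhere. This is a sketch based on the frontend language API; the endpoint URL and prompt are assumptions:
import sglang as sgl

@sgl.function
def hello(s):
    # Compose a prompt and ask the backend to generate a short reply
    s += "Say hello in one short sentence. " + sgl.gen("answer", max_tokens=32)

# Point the frontend at a backend server started on a GPU machine (assumed address)
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
print(hello.run()["answer"])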