SGLang on AMD#

Introduction#

This document describes how to set up an AMD-based environment for SGLang. If you encounter issues or have questions, please open an issue on the SGLang repository.

System Configuration#

When using AMD GPUs (such as the MI300X), certain system-level optimizations help ensure stable performance. Here we take the MI300X as an example. AMD provides official documentation for MI300X optimization and system tuning.

NOTE: We strongly recommend reading these docs in their entirety to fully utilize your system.

Below are a few key settings to confirm or enable:

Update GRUB Settings#

In /etc/default/grub, append the following to GRUB_CMDLINE_LINUX:

pci=realloc=off iommu=pt
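
For example, the resulting line might look like the following (any parameters already present on your system should be preserved; the ellipsis stands in for them):

GRUB_CMDLINE_LINUX="... pci=realloc=off iommu=pt"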

Afterward, run sudo update-grub (or your distro’s equivalent) and reboot.
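
One way to verify that the kernel picked up the new parameters after the reboot:

cat /proc/cmdline
# The output should include: pci=realloc=off iommu=pt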

Disable NUMA Auto-Balancing#

sudo sh -c 'echo 0 > /proc/sys/kernel/numa_balancing'

You can automate or verify this change with a helper script; a manual check is shown below.
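
To confirm the setting took effect (0 means auto-balancing is disabled) and, optionally, persist it across reboots via sysctl (a common convention, assuming your distro reads /etc/sysctl.d):

cat /proc/sys/kernel/numa_balancing
# Should print: 0

echo 'kernel.numa_balancing = 0' | sudo tee /etc/sysctl.d/99-disable-numa-balancing.conf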

Again, please go through the entire documentation to confirm your system is using the recommended configuration.

Installing SGLang#

For general installation instructions, see the official SGLang Installation Docs. Below are the AMD-specific steps summarized for convenience.

Install from Source#

git clone https://github.com/sgl-project/sglang.git
cd sglang

pip install --upgrade pip
pip install sgl-kernel --force-reinstall --no-deps
pip install -e "python[all_hip]"
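
As a quick sanity check that the package imports correctly (the __version__ attribute is exposed by recent SGLang releases):

python3 -c "import sglang; print(sglang.__version__)"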

Examples#
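
The examples below assume a drun alias that wraps docker run with the ROCm device flags, and an image tagged sglang_image (a placeholder for your locally built or pulled SGLang ROCm image). A minimal sketch of such an alias:

# Hypothetical alias: exposes the AMD GPU devices (/dev/kfd, /dev/dri) to the container.
alias drun='docker run -it --rm \
    --device=/dev/kfd \
    --device=/dev/dri \
    --group-add video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    --shm-size 16G'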

Running DeepSeek-V3#

The only difference when running DeepSeek-V3 is the model passed via --model-path when starting the server. Here’s an example command:

drun -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --ipc=host \
    --env "HF_TOKEN=<secret>" \
    sglang_image \
    python3 -m sglang.launch_server \
    --model-path deepseek-ai/DeepSeek-V3 \
    --tp 8 \
    --trust-remote-code \
    --host 0.0.0.0 \
    --port 30000

Running Llama3.1#

Running Llama3.1 is nearly identical. The only difference is the model passed via --model-path when starting the server, as shown in the following example command:

drun -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --ipc=host \
    --env "HF_TOKEN=<secret>" \
    sglang_image \
    python3 -m sglang.launch_server \
    --model-path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --tp 8 \
    --trust-remote-code \
    --host 0.0.0.0 \
    --port 30000

Warmup Step#

When the server displays “The server is fired up and ready to roll!”, the startup was successful and the server is ready to accept requests.
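
Once the server is ready, you can confirm end-to-end generation with a request to SGLang’s native /generate endpoint (adjust the host and port to match your launch flags):

curl http://localhost:30000/generate \
    -H "Content-Type: application/json" \
    -d '{"text": "The capital of France is", "sampling_params": {"temperature": 0, "max_new_tokens": 16}}'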