Structured Outputs#
You can specify a JSON schema, regular expression, or EBNF to constrain the model output. The model output will be guaranteed to follow the given constraints. Only one constraint parameter (json_schema, regex, or ebnf) can be specified for a request.
SGLang supports three grammar backends:
Outlines (default): Supports JSON schema and regular expression constraints.
XGrammar: Supports JSON schema, regular expression, and EBNF constraints.
Llguidance: Supports JSON schema, regular expression, and EBNF constraints.
We suggest using XGrammar for its better performance and utility. XGrammar currently uses the GGML BNF format. For more details, see the XGrammar technical overview.
To use XGrammar, add --grammar-backend xgrammar when launching the server. To use llguidance, add --grammar-backend llguidance when launching the server. If no backend is specified, Outlines is used as the default.
For better output quality, it is advisable to explicitly include instructions in the prompt that guide the model toward the desired format. For example, you can specify: "Please generate the output in the following JSON format: …".
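As a minimal sketch of this advice (the schema and prompt below are illustrative only and are not part of the cells that follow), you can embed the expected JSON format in the prompt in addition to passing the constraint:

```python
import json

# Hypothetical schema, used only to illustrate prompt construction.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["name", "population"],
}

# State the expected format in the prompt as well as constraining the decoder.
prompt = (
    "Give me the information of the capital of France. "
    f"Please generate the output in the following JSON format: {json.dumps(schema)}"
)
```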
OpenAI Compatible API#
[1]:
import openai
import os
from sglang.test.test_utils import is_in_ci
if is_in_ci():
from patch import launch_server_cmd
else:
from sglang.utils import launch_server_cmd
from sglang.utils import wait_for_server, print_highlight, terminate_process
os.environ["TOKENIZERS_PARALLELISM"] = "false"
server_process, port = launch_server_cmd(
"python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --host 0.0.0.0 --grammar-backend xgrammar"
)
wait_for_server(f"http://localhost:{port}")
client = openai.Client(base_url=f"http://127.0.0.1:{port}/v1", api_key="None")
[2025-03-16 16:14:04] server_args=ServerArgs(model_path='meta-llama/Meta-Llama-3.1-8B-Instruct', tokenizer_path='meta-llama/Meta-Llama-3.1-8B-Instruct', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization=None, quantization_param_path=None, context_length=None, device='cuda', served_model_name='meta-llama/Meta-Llama-3.1-8B-Instruct', chat_template=None, is_embedding=False, revision=None, host='0.0.0.0', port=39641, mem_fraction_static=0.88, max_running_requests=200, max_total_tokens=20480, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, tp_size=1, stream_interval=1, stream_output=False, random_seed=212958618, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, show_time_cost=False, enable_metrics=False, decode_log_interval=40, api_key=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, dp_size=1, load_balance_method='round_robin', ep_size=1, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend='flashinfer', sampling_backend='flashinfer', grammar_backend='xgrammar', speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=5, speculative_eagle_topk=4, speculative_num_draft_tokens=8, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, disable_cuda_graph=True, disable_cuda_graph_padding=False, enable_nccl_nvls=False, disable_outlines_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_overlap_schedule=False, enable_mixed_chunk=False, enable_dp_attention=False, enable_ep_moe=False, enable_torch_compile=False, torch_compile_max_bs=32, cuda_graph_max_bs=160, cuda_graph_bs=None, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, tool_call_parser=None, enable_hierarchical_cache=False, enable_flashinfer_mla=False, enable_flashmla=False, flashinfer_mla_disable_ragged=False, warmups=None, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False)
[2025-03-16 16:14:27 TP0] Init torch distributed begin.
[2025-03-16 16:14:27 TP0] Init torch distributed ends. mem usage=13.12 GB
[2025-03-16 16:14:27 TP0] Load weight begin. avail mem=44.72 GB
[2025-03-16 16:14:27 TP0] The following error message 'operation scheduled before its operands' can be ignored.
[2025-03-16 16:14:29 TP0] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 25% Completed | 1/4 [00:00<00:02, 1.02it/s]
Loading safetensors checkpoint shards: 50% Completed | 2/4 [00:01<00:01, 1.53it/s]
Loading safetensors checkpoint shards: 75% Completed | 3/4 [00:02<00:00, 1.34it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00, 1.19it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00, 1.23it/s]
[2025-03-16 16:14:32 TP0] Load weight end. type=LlamaForCausalLM, dtype=torch.bfloat16, avail mem=27.54 GB, mem usage=17.18 GB.
[2025-03-16 16:14:32 TP0] KV Cache is allocated. #tokens: 20480, K size: 1.25 GB, V size: 1.25 GB
[2025-03-16 16:14:32 TP0] Memory pool end. avail mem=24.75 GB
[2025-03-16 16:14:33 TP0] max_total_num_tokens=20480, chunked_prefill_size=8192, max_prefill_tokens=16384, max_running_requests=200, context_len=131072
[2025-03-16 16:14:36] INFO: Started server process [3431718]
[2025-03-16 16:14:36] INFO: Waiting for application startup.
[2025-03-16 16:14:36] INFO: Application startup complete.
[2025-03-16 16:14:36] INFO: Uvicorn running on http://0.0.0.0:39641 (Press CTRL+C to quit)
[2025-03-16 16:14:36] INFO: 127.0.0.1:35782 - "GET /v1/models HTTP/1.1" 200 OK
[2025-03-16 16:14:37] INFO: 127.0.0.1:35784 - "GET /get_model_info HTTP/1.1" 200 OK
[2025-03-16 16:14:37 TP0] Prefill batch. #new-seq: 1, #new-token: 7, #cached-token: 0, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:39] INFO: 127.0.0.1:35792 - "POST /generate HTTP/1.1" 200 OK
[2025-03-16 16:14:39] The server is fired up and ready to roll!
NOTE: Typically, the server runs in a separate terminal.
In this notebook, we run the server and notebook code together, so their outputs are combined.
To improve clarity, the server logs are displayed in the original black color, while the notebook outputs are highlighted in blue.
We are running these notebooks in a CI parallel environment, so the throughput is not representative of the actual performance.
JSON#
You can directly define a JSON schema or use Pydantic to define and validate the response.
Using Pydantic
[2]:
from pydantic import BaseModel, Field
# Define the schema using Pydantic
class CapitalInfo(BaseModel):
name: str = Field(..., pattern=r"^\w+$", description="Name of the capital city")
population: int = Field(..., description="Population of the capital city")
response = client.chat.completions.create(
model="meta-llama/Meta-Llama-3.1-8B-Instruct",
messages=[
{
"role": "user",
"content": "Please generate the information of the capital of France in the JSON format.",
},
],
temperature=0,
max_tokens=128,
response_format={
"type": "json_schema",
"json_schema": {
"name": "foo",
# convert the pydantic model to json schema
"schema": CapitalInfo.model_json_schema(),
},
},
)
response_content = response.choices[0].message.content
# validate the JSON response by the pydantic model
capital_info = CapitalInfo.model_validate_json(response_content)
print_highlight(f"Validated response: {capital_info.model_dump_json()}")
[2025-03-16 16:14:42 TP0] Prefill batch. #new-seq: 1, #new-token: 48, #cached-token: 1, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:44] INFO: 127.0.0.1:41760 - "POST /v1/chat/completions HTTP/1.1" 200 OK
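Constrained decoding guarantees schema-conformant tokens, but a response can still be cut short by max_tokens. A small sketch (reusing response_content, CapitalInfo, and print_highlight from the cell above) of guarding the validation step with Pydantic's ValidationError:

```python
from pydantic import ValidationError

try:
    capital_info = CapitalInfo.model_validate_json(response_content)
    print_highlight(f"Validated response: {capital_info.model_dump_json()}")
except ValidationError as e:
    # A truncated or otherwise malformed response fails validation here.
    print_highlight(f"Response did not match the schema: {e}")
```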
JSON Schema Directly
[3]:
import json
json_schema = json.dumps(
{
"type": "object",
"properties": {
"name": {"type": "string", "pattern": "^[\\w]+$"},
"population": {"type": "integer"},
},
"required": ["name", "population"],
}
)
response = client.chat.completions.create(
model="meta-llama/Meta-Llama-3.1-8B-Instruct",
messages=[
{
"role": "user",
"content": "Give me the information of the capital of France in the JSON format.",
},
],
temperature=0,
max_tokens=128,
response_format={
"type": "json_schema",
"json_schema": {"name": "foo", "schema": json.loads(json_schema)},
},
)
print_highlight(response.choices[0].message.content)
[2025-03-16 16:14:44 TP0] Prefill batch. #new-seq: 1, #new-token: 19, #cached-token: 30, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:44] INFO: 127.0.0.1:41760 - "POST /v1/chat/completions HTTP/1.1" 200 OK
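The response content is a plain JSON string; a minimal sketch (reusing response and print_highlight from the cell above) of parsing it into a dict for further use:

```python
import json

data = json.loads(response.choices[0].message.content)
print_highlight(f"{data['name']} has a population of {data['population']}")
```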
EBNF#
[4]:
ebnf_grammar = """
root ::= city | description
city ::= "London" | "Paris" | "Berlin" | "Rome"
description ::= city " is " status
status ::= "the capital of " country
country ::= "England" | "France" | "Germany" | "Italy"
"""
response = client.chat.completions.create(
model="meta-llama/Meta-Llama-3.1-8B-Instruct",
messages=[
{"role": "system", "content": "You are a helpful geography bot."},
{
"role": "user",
"content": "Give me the information of the capital of France.",
},
],
temperature=0,
max_tokens=32,
extra_body={"ebnf": ebnf_grammar},
)
print_highlight(response.choices[0].message.content)
[2025-03-16 16:14:44 TP0] Prefill batch. #new-seq: 1, #new-token: 27, #cached-token: 25, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:44 TP0] Decode batch. #running-req: 1, #token: 55, token usage: 0.00, gen throughput (token/s): 3.51, #queue-req: 0,
[2025-03-16 16:14:44] INFO: 127.0.0.1:41760 - "POST /v1/chat/completions HTTP/1.1" 200 OK
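EBNF grammars do not have to describe full sentences. A sketch (illustrative, not from the notebook, reusing the client defined earlier) that constrains the reply to a bare yes/no:

```python
# Restrict the model to exactly "yes" or "no".
yes_no_grammar = 'root ::= "yes" | "no"'

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Is Paris the capital of France?"}],
    temperature=0,
    max_tokens=8,
    extra_body={"ebnf": yes_no_grammar},
)
print_highlight(response.choices[0].message.content)
```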
Regular expression#
[5]:
response = client.chat.completions.create(
model="meta-llama/Meta-Llama-3.1-8B-Instruct",
messages=[
{"role": "user", "content": "What is the capital of France?"},
],
temperature=0,
max_tokens=128,
extra_body={"regex": "(Paris|London)"},
)
print_highlight(response.choices[0].message.content)
[2025-03-16 16:14:44 TP0] Prefill batch. #new-seq: 1, #new-token: 12, #cached-token: 30, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:44] INFO: 127.0.0.1:41760 - "POST /v1/chat/completions HTTP/1.1" 200 OK
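Regular expressions are convenient for small fixed formats. An illustrative sketch (not part of the original notebook) that forces a four-digit year:

```python
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[
        {"role": "user", "content": "In which year did the French Revolution begin?"},
    ],
    temperature=0,
    max_tokens=16,
    extra_body={"regex": "[0-9]{4}"},
)
print_highlight(response.choices[0].message.content)
```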
Native API and SGLang Runtime (SRT)#
JSON#
Using Pydantic
[6]:
import requests
import json
from pydantic import BaseModel, Field
# Define the schema using Pydantic
class CapitalInfo(BaseModel):
name: str = Field(..., pattern=r"^\w+$", description="Name of the capital city")
population: int = Field(..., description="Population of the capital city")
# Make API request
response = requests.post(
f"http://localhost:{port}/generate",
json={
"text": "Here is the information of the capital of France in the JSON format.\n",
"sampling_params": {
"temperature": 0,
"max_new_tokens": 64,
"json_schema": json.dumps(CapitalInfo.model_json_schema()),
},
},
)
print_highlight(response.json())
response_data = json.loads(response.json()["text"])
# validate the response by the pydantic model
capital_info = CapitalInfo.model_validate(response_data)
print_highlight(f"Validated response: {capital_info.model_dump_json()}")
[2025-03-16 16:14:45 TP0] Prefill batch. #new-seq: 1, #new-token: 14, #cached-token: 1, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:45] INFO: 127.0.0.1:41774 - "POST /generate HTTP/1.1" 200 OK
JSON Schema Directly
[7]:
json_schema = json.dumps(
{
"type": "object",
"properties": {
"name": {"type": "string", "pattern": "^[\\w]+$"},
"population": {"type": "integer"},
},
"required": ["name", "population"],
}
)
# JSON
response = requests.post(
f"http://localhost:{port}/generate",
json={
"text": "Here is the information of the capital of France in the JSON format.\n",
"sampling_params": {
"temperature": 0,
"max_new_tokens": 64,
"json_schema": json_schema,
},
},
)
print_highlight(response.json())
[2025-03-16 16:14:45 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 14, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:45 TP0] Decode batch. #running-req: 1, #token: 30, token usage: 0.00, gen throughput (token/s): 38.40, #queue-req: 0,
[2025-03-16 16:14:45] INFO: 127.0.0.1:41784 - "POST /generate HTTP/1.1" 200 OK
EBNF#
[8]:
import requests
response = requests.post(
f"http://localhost:{port}/generate",
json={
"text": "Give me the information of the capital of France.",
"sampling_params": {
"max_new_tokens": 128,
"temperature": 0,
"n": 3,
"ebnf": (
"root ::= city | description\n"
'city ::= "London" | "Paris" | "Berlin" | "Rome"\n'
'description ::= city " is " status\n'
'status ::= "the capital of " country\n'
'country ::= "England" | "France" | "Germany" | "Italy"'
),
},
"stream": False,
"return_logprob": False,
},
)
print_highlight(response.json())
[2025-03-16 16:14:45 TP0] Prefill batch. #new-seq: 1, #new-token: 10, #cached-token: 1, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:45 TP0] Prefill batch. #new-seq: 3, #new-token: 3, #cached-token: 30, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:46] INFO: 127.0.0.1:41786 - "POST /generate HTTP/1.1" 200 OK
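Because the request above sets "n": 3, the endpoint returns one entry per sample. A sketch (assuming the list-of-dicts response shape printed above) of iterating over them:

```python
result = response.json()
# With n > 1 the /generate endpoint returns a list of generations; fall back to a
# single dict otherwise so the loop works in both cases.
for i, item in enumerate(result if isinstance(result, list) else [result]):
    print_highlight(f"Sample {i}: {item['text']}")
```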
Regular expression#
[9]:
response = requests.post(
f"http://localhost:{port}/generate",
json={
"text": "Paris is the capital of",
"sampling_params": {
"temperature": 0,
"max_new_tokens": 64,
"regex": "(France|England)",
},
},
)
print_highlight(response.json())
[2025-03-16 16:14:46 TP0] Prefill batch. #new-seq: 1, #new-token: 5, #cached-token: 1, token usage: 0.00, #running-req: 0, #queue-req: 0,
[2025-03-16 16:14:46] INFO: 127.0.0.1:41794 - "POST /generate HTTP/1.1" 200 OK
[10]:
terminate_process(server_process)
Offline Engine API#
[11]:
import sglang as sgl
llm = sgl.Engine(
model_path="meta-llama/Meta-Llama-3.1-8B-Instruct", grammar_backend="xgrammar"
)
Loading safetensors checkpoint shards: 0% Completed | 0/4 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 25% Completed | 1/4 [00:01<00:03, 1.03s/it]
Loading safetensors checkpoint shards: 50% Completed | 2/4 [00:01<00:01, 1.49it/s]
Loading safetensors checkpoint shards: 75% Completed | 3/4 [00:02<00:00, 1.36it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00, 1.09it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00, 1.15it/s]
JSON#
Using Pydantic
[12]:
import json
from pydantic import BaseModel, Field
prompts = [
"Give me the information of the capital of China in the JSON format.",
"Give me the information of the capital of France in the JSON format.",
"Give me the information of the capital of Ireland in the JSON format.",
]
# Define the schema using Pydantic
class CapitalInfo(BaseModel):
name: str = Field(..., pattern=r"^\w+$", description="Name of the capital city")
population: int = Field(..., description="Population of the capital city")
sampling_params = {
"temperature": 0.1,
"top_p": 0.95,
"json_schema": json.dumps(CapitalInfo.model_json_schema()),
}
outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
print_highlight("===============================")
print_highlight(f"Prompt: {prompt}") # validate the output by the pydantic model
capital_info = CapitalInfo.model_validate_json(output["text"])
print_highlight(f"Validated output: {capital_info.model_dump_json()}")
JSON Schema Directly
[13]:
prompts = [
"Give me the information of the capital of China in the JSON format.",
"Give me the information of the capital of France in the JSON format.",
"Give me the information of the capital of Ireland in the JSON format.",
]
json_schema = json.dumps(
{
"type": "object",
"properties": {
"name": {"type": "string", "pattern": "^[\\w]+$"},
"population": {"type": "integer"},
},
"required": ["name", "population"],
}
)
sampling_params = {"temperature": 0.1, "top_p": 0.95, "json_schema": json_schema}
outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
print_highlight("===============================")
print_highlight(f"Prompt: {prompt}\nGenerated text: {output['text']}")
Generated text: {"name": "Beijing", "population": 21500000}
Generated text: {"name": "Paris", "population": 2141000}
Generated text: {"name": "Dublin", "population": 527617}
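If you prefer not to go through Pydantic, the generated strings can also be checked against the raw schema. A sketch using the third-party jsonschema package (an assumption: it is not used elsewhere in this notebook), reusing json_schema and outputs from the cell above:

```python
import json
import jsonschema  # third-party validator, assumed to be installed

schema = json.loads(json_schema)
for output in outputs:
    data = json.loads(output["text"])
    # Raises jsonschema.exceptions.ValidationError if a field is missing or mistyped.
    jsonschema.validate(instance=data, schema=schema)
print_highlight("All outputs conform to the schema.")
```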
EBNF#
[14]:
prompts = [
"Give me the information of the capital of France.",
"Give me the information of the capital of Germany.",
"Give me the information of the capital of Italy.",
]
sampling_params = {
"temperature": 0.8,
"top_p": 0.95,
"ebnf": (
"root ::= city | description\n"
'city ::= "London" | "Paris" | "Berlin" | "Rome"\n'
'description ::= city " is " status\n'
'status ::= "the capital of " country\n'
'country ::= "England" | "France" | "Germany" | "Italy"'
),
}
outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
print_highlight("===============================")
print_highlight(f"Prompt: {prompt}\nGenerated text: {output['text']}")
Generated text: Paris is the capital of France
Generated text: Berlin is the capital of Germany
Generated text: Rome is the capital of Italy
Regular expression#
[15]:
prompts = [
"Please provide information about London as a major global city:",
"Please provide information about Paris as a major global city:",
]
sampling_params = {"temperature": 0.8, "top_p": 0.95, "regex": "(France|England)"}
outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
print_highlight("===============================")
print_highlight(f"Prompt: {prompt}\nGenerated text: {output['text']}")
Generated text: England
Generated text: France
[16]:
llm.shutdown()