diff --git a/docs/how_to/ebnf_guided_generation.rst b/docs/how_to/ebnf_guided_generation.rst
index e830cc6..2d429e6 100644
--- a/docs/how_to/ebnf_guided_generation.rst
+++ b/docs/how_to/ebnf_guided_generation.rst
@@ -44,7 +44,7 @@ your choice.
 .. code:: python
 
     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings
diff --git a/docs/how_to/engine_integration.rst b/docs/how_to/engine_integration.rst
index adef9ed..e1d8a44 100644
--- a/docs/how_to/engine_integration.rst
+++ b/docs/how_to/engine_integration.rst
@@ -49,7 +49,7 @@ logits. To be safe, always pass in the former when instantiating ``xgr.Tokenizer
 .. code:: python
 
     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings
@@ -174,7 +174,7 @@ to generate a valid JSON.
     from transformers import AutoTokenizer, AutoConfig
 
     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
    # This can be larger than tokenizer.vocab_size due to paddings
diff --git a/docs/how_to/json_generation.rst b/docs/how_to/json_generation.rst
index 329fd28..f67eb89 100644
--- a/docs/how_to/json_generation.rst
+++ b/docs/how_to/json_generation.rst
@@ -45,7 +45,7 @@ your choice.
 .. code:: python
 
    # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings