
Commit

[Docs] Change some models to Qwen (#88)
CharlieFRuan authored Nov 22, 2024
1 parent ce42833 commit 8b47fa1
Showing 3 changed files with 4 additions and 4 deletions.
docs/how_to/ebnf_guided_generation.rst (2 changes: 1 addition & 1 deletion)
@@ -44,7 +44,7 @@ your choice.
 .. code:: python

     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings
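For reference, a minimal runnable sketch of where this snippet leads in the docs: the tokenizer and config are used to build xgrammar's tokenizer info. The ``xgr.TokenizerInfo.from_huggingface`` call and its ``vocab_size`` argument follow engine_integration.rst's description, but they are assumptions here, not part of this diff.

    from transformers import AutoConfig, AutoTokenizer
    import xgrammar as xgr  # assumed package import name

    # Get tokenizer info for the model the docs now use
    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    config = AutoConfig.from_pretrained(model_id)

    # config.vocab_size can be larger than tokenizer.vocab_size due to
    # padding, so pass the config value explicitly (assumed API).
    tokenizer_info = xgr.TokenizerInfo.from_huggingface(
        tokenizer, vocab_size=config.vocab_size
    )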
docs/how_to/engine_integration.rst (4 changes: 2 additions & 2 deletions)
@@ -49,7 +49,7 @@ logits. To be safe, always pass in the former when instantiating ``xgr.TokenizerInfo``.
 .. code:: python

     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings
@@ -174,7 +174,7 @@ to generate a valid JSON.
     from transformers import AutoTokenizer, AutoConfig

     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings
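The hunk above sits in a walkthrough about constraining decoding "to generate a valid JSON". A hedged sketch of the step that follows the tokenizer setup, assuming xgrammar's documented ``GrammarCompiler``/``GrammarMatcher`` names (not shown in this diff):

    import xgrammar as xgr

    # Compile the built-in JSON grammar against the tokenizer info built above
    compiler = xgr.GrammarCompiler(tokenizer_info)
    compiled_grammar = compiler.compile_builtin_json_grammar()

    # A matcher walks the grammar token by token during decoding and fills a
    # bitmask of which tokens are currently legal (assumed helper names).
    matcher = xgr.GrammarMatcher(compiled_grammar)
    token_bitmask = xgr.allocate_token_bitmask(1, tokenizer_info.vocab_size)
    matcher.fill_next_token_bitmask(token_bitmask)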
docs/how_to/json_generation.rst (2 changes: 1 addition & 1 deletion)
@@ -45,7 +45,7 @@ your choice.
 .. code:: python

     # Get tokenizer info
-    model_id = "meta-llama/Llama-3.2-1B-Instruct"
+    model_id = "Qwen/Qwen2.5-0.5B-Instruct"
     tokenizer = AutoTokenizer.from_pretrained(model_id)
     config = AutoConfig.from_pretrained(model_id)
     # This can be larger than tokenizer.vocab_size due to paddings
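Since json_generation.rst is about schema-guided output, a short sketch of how the snippet continues there when a specific schema is wanted instead of free-form JSON. ``compile_json_schema`` is assumed from xgrammar's API, and the schema itself is a hypothetical example:

    import json
    import xgrammar as xgr

    # A hypothetical schema, purely for illustration
    schema = json.dumps({
        "type": "object",
        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        "required": ["name", "age"],
    })

    # Constrain generation to JSON matching the schema (assumed API)
    compiler = xgr.GrammarCompiler(tokenizer_info)
    compiled_grammar = compiler.compile_json_schema(schema)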
