Skip to content

Commit

Permalink
update to main
Browse files Browse the repository at this point in the history
  • Loading branch information
HamidShojanazeri committed Aug 8, 2023
2 parents 51269b8 + 8fddaa9 commit c3a11c4
Show file tree
Hide file tree
Showing 9 changed files with 41 additions and 21 deletions.
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -29,7 +29,7 @@ Llama 2 is a new technology that carries potential risks with use. Testing condu
**For more in depth information checkout the following:**

* [Single GPU Fine-tuning](./docs/single_gpu.md)
* [Multi-GPU Fine-tuning](./docs/mutli_gpu.md)
* [Multi-GPU Fine-tuning](./docs/multi_gpu.md)
* [LLM Fine-tuning](./docs/LLM_finetuning.md)
* [Adding custom datasets](./docs/Dataset.md)
* [Inference](./docs/inference.md)
Expand Down
19 changes: 19 additions & 0 deletions UPDATES.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,19 @@
## System Prompt Update

### Observed Issue
We received feedback from the community on our prompt template and we are providing an update to reduce the false refusal rates seen. False refusals occur when the model incorrectly refuses to answer a question that it should, for example due to overly broad instructions to be cautious in how it provides responses.

### Updated approach
Based on evaluation and analysis, we recommend the removal of the system prompt as the default setting. Pull request [#626](https://github.com/facebookresearch/llama/pull/626) removes the system prompt as the default option, but still provides an example to help enable experimentation for those using it.

## Token Sanitization Update

### Observed Issue
The PyTorch scripts currently provided for tokenization and model inference allow for direct prompt injection via string concatenation. Prompt injections allow for the addition of special system and instruction prompt strings from user-provided prompts.

As noted in the documentation, these strings are required to use the fine-tuned chat models. However, prompt injections have also been used for manipulating or abusing models by bypassing their safeguards, allowing for the creation of content or behaviors otherwise outside the bounds of acceptable use.

### Updated approach
We recommend sanitizing [these strings](https://github.com/facebookresearch/llama#fine-tuned-chat-models) from any user provided prompts. Sanitization of user prompts mitigates malicious or accidental abuse of these strings. The provided scripts have been updated to do this.

Note: even with this update safety classifiers should still be applied to catch unsafe behaviors or content produced by the model. An [example](https://github.com/facebookresearch/llama-recipes/blob/main/inference/inference.py) of how to deploy such a classifier can be found in the llama-recipes repository.
2 changes: 1 addition & 1 deletion docs/mutli_gpu.md → docs/multi_gpu.md
Original file line number Diff line number Diff line change
Expand Up @@ -82,7 +82,7 @@ Currently 4 datasets are supported that can be found in [Datasets config file](.
* `alpaca_dataset` : to get this open source data please download the `aplaca.json` to `ft_dataset` folder.

```bash
wget -P ft_dataset https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json
wget -P ft_datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
```

* `samsum_dataset`
Expand Down
2 changes: 1 addition & 1 deletion docs/single_gpu.md
Original file line number Diff line number Diff line change
Expand Up @@ -47,7 +47,7 @@ Currently 4 datasets are supported that can be found in [Datasets config file](.
* `alpaca_dataset` : to get this open source data please download the `aplaca.json` to `ft_dataset` folder.

```bash
wget -P ft_dataset https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json
wget -P ft_datasets https://raw.githubusercontent.com/tatsu-lab/stanford_alpaca/main/alpaca_data.json
```

* `samsum_dataset`
Expand Down
5 changes: 4 additions & 1 deletion ft_datasets/alpaca_dataset.py
Original file line number Diff line number Diff line change
Expand Up @@ -42,6 +42,9 @@ def __len__(self):
return len(self.ann)

def __getitem__(self, index):
IGNORE_INDEX = -100 # The default setting in CrossEntropyLoss


ann = self.ann[index]
if ann.get("input", "") == "":
prompt = PROMPT_DICT["prompt_no_input"].format_map(ann)
Expand All @@ -66,7 +69,7 @@ def __getitem__(self, index):
example_mask = example.ge(0)
label_mask = labels.ge(0)
example[~example_mask] = 0
labels[~label_mask] = 0
labels[~label_mask] = IGNORE_INDEX
example_mask = example_mask.float()
label_mask = label_mask.float()

Expand Down
17 changes: 3 additions & 14 deletions inference/chat_utils.py
Original file line number Diff line number Diff line change
Expand Up @@ -16,22 +16,11 @@ class Message(TypedDict):

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""

def format_tokens(dialogs, tokenizer):
prompt_tokens = []
for dialog in dialogs:
if dialog[0]["role"] != "system":
dialog = [
{
"role": "system",
"content": DEFAULT_SYSTEM_PROMPT,
}
] + dialog
dialog = [
if dialog[0]["role"] == "system":
dialog = [
{
"role": dialog[1]["role"],
"content": B_SYS
Expand All @@ -47,7 +36,7 @@ def format_tokens(dialogs, tokenizer):
"starting with user and alternating (u/a/u/a/u...)"
)
"""
Please verify that yout tokenizer support adding "[INST]", "[/INST]" to your inputs.
Please verify that your tokenizer support adding "[INST]", "[/INST]" to your inputs.
Here, we are adding it manually.
"""
dialog_tokens: List[int] = sum(
Expand Down
7 changes: 7 additions & 0 deletions inference/chats.json
Original file line number Diff line number Diff line change
Expand Up @@ -18,5 +18,12 @@
"content": "Always answer with emojis"
},
{"role": "user", "content": "How to go from Beijing to NY?"}
],
[
{
"role": "system",
"content": "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
},
{"role": "user", "content": "Write a brief birthday message to John"}
]
]
4 changes: 3 additions & 1 deletion scripts/spellcheck_conf/wordlist.txt
Original file line number Diff line number Diff line change
Expand Up @@ -1118,4 +1118,6 @@ dataset's
jupyter
mutli
summarization
xA
xA
Sanitization
tokenization
4 changes: 2 additions & 2 deletions utils/train_utils.py
Original file line number Diff line number Diff line change
Expand Up @@ -172,14 +172,14 @@ def train(model, train_dataloader,eval_dataloader, tokenizer, optimizer, lr_sche
model_checkpointing.save_model_and_optimizer_sharded(model, rank, train_config)
if train_config.save_optimizer:
model_checkpointing.save_model_and_optimizer_sharded(model, rank, train_config, optim=optimizer)
print(" Saving the FSDP model checkpoints qnd optimizer using SHARDED_STATE_DICT")
print(" Saving the FSDP model checkpoints and optimizer using SHARDED_STATE_DICT")
print("=====================================================")

if not train_config.use_peft and train_config.save_optimizer:
model_checkpointing.save_optimizer_checkpoint(
model, optimizer, rank, train_config, epoch=epoch
)
print(" Saving the FSDP model checkpoints qnd optimizer using FULL_STATE_DICT")
print(" Saving the FSDP model checkpoints and optimizer using FULL_STATE_DICT")
print("=====================================================")
if train_config.enable_fsdp:
dist.barrier()
Expand Down

0 comments on commit c3a11c4

Please sign in to comment.