Fix lint error
mreso committed Sep 1, 2023
1 parent 34e4549 commit 1437b91
Showing 2 changed files with 3 additions and 2 deletions.

4 changes: 2 additions & 2 deletions docs/single_gpu.md

````diff
@@ -4,7 +4,7 @@ To run fine-tuning on a single GPU, we will make use of two packages

 1- [PEFT](https://huggingface.co/blog/peft) methods and in specific using HuggingFace [PEFT](https://github.com/huggingface/peft)library.

-2- [BitandBytes](https://github.com/TimDettmers/bitsandbytes) int8 quantization.
+2- [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) int8 quantization.

 Given combination of PEFT and Int8 quantization, we would be able to fine_tune a Llama 2 7B model on one consumer grade GPU such as A10.

@@ -21,7 +21,7 @@ pip install -r requirements.txt

 ## How to run it?

-Get access to a machine with one GPU or if using a multi-GPU macine please make sure to only make one of them visible using `export CUDA_VISIBLE_DEVICES=GPU:id` and run the following. It runs by default with `samsum_dataset` for summarization application.
+Get access to a machine with one GPU or if using a multi-GPU machine please make sure to only make one of them visible using `export CUDA_VISIBLE_DEVICES=GPU:id` and run the following. It runs by default with `samsum_dataset` for summarization application.


 ```bash
````
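
The documentation touched above describes the core recipe: PEFT adapters on top of a bitsandbytes int8-quantized base model, with only one GPU made visible. As a rough, self-contained sketch of how those pieces typically fit together with the Hugging Face `transformers` and `peft` libraries; the model name, LoRA hyperparameters, and GPU id below are illustrative assumptions, not values taken from the repository's actual fine-tuning script:

```python
# Minimal sketch of single-GPU fine-tuning setup: int8 base model + LoRA adapters.
# The checkpoint name, LoRA settings, and GPU id are placeholders, not the repo's defaults.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose a single GPU, as the doc advises

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint (gated on the Hub)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,          # bitsandbytes int8 quantization of the base weights
    device_map="auto",
    torch_dtype=torch.float16,
)

# Freeze and cast layers so the quantized base model trains stably with adapters.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # typical choice for Llama-style attention
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

Setting `CUDA_VISIBLE_DEVICES` before importing torch mirrors the `export CUDA_VISIBLE_DEVICES=GPU:id` advice in the diff, and `print_trainable_parameters()` confirms that only the small LoRA adapters, not the quantized 7B base weights, receive gradients.
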
1 change: 1 addition & 0 deletions scripts/spellcheck_conf/wordlist.txt

```diff
@@ -1121,3 +1121,4 @@ summarization
 xA
 Sanitization
 tokenization
+bitsandbytes
```
