Commit 654b9b0

update readme on src/llmcompressor/transformers/finetune/README.md, no support for FSDP

Signed-off-by: George Ohashi <[email protected]>

horheynm committed Feb 26, 2025
1 parent b589079 commit 654b9b0
Showing 1 changed file with 4 additions and 47 deletions.
51 changes: 4 additions & 47 deletions src/llmcompressor/transformers/finetune/README.md
@@ -1,45 +1,5 @@
# Sparse Finetuning

-## Launching from Console Scripts
-
-### with DataParallel (default)
-
-```bash
-llmcompressor.transformers.text_generation.train \
-    --model PATH_TO_MODEL \
-    --distill_teacher PATH_TO_TEACHER \
-    --dataset DATASET_NAME \
-    --recipe PATH_TO_RECIPE \
-    --output_dir PATH_TO_OUTPUT \
-    --num_train_epochs 1 \
-    --splits "train"
-```
-
-Also supported:
-
-* `llmcompressor.transformers.text_generation.finetune` (alias for train)
-* `llmcompressor.transformers.text_generation.oneshot`
-* `llmcompressor.transformers.text_generation.eval`
-* `llmcompressor.transformers.text_generation.apply` (for running in sequential stage mode)
-* `llmcompressor.transformers.text_generation.compress` (alias for apply)
-
-### with FSDP
-
-```bash
-accelerate launch \
-    --config_file example_fsdp_config.yaml \
-    --no_python llmcompressor.transformers.text_generation.finetune \
-    --model PATH_TO_MODEL \
-    --distill_teacher PATH_TO_TEACHER \
-    --dataset DATASET_NAME \
-    --recipe PATH_TO_RECIPE \
-    --output_dir PATH_TO_OUTPUT \
-    --num_train_epochs 1 \
-    --splits "train"
-```
-
-See [configure_fsdp.md](../../../../examples/finetuning/configure_fsdp.md) for additional instructions on setting up an FSDP configuration.
-
## Launching from Python

```python
# ... (Python launch example collapsed in the diff view)
```
@@ -74,10 +34,10 @@ train(

Finetuning arguments are split into four groups; a sketch showing them parsed together follows the list:

-* ModelArguments: `src/llmcompressor/transformers/utils/arg_parser/model_arguments.py`
-* TrainingArguments: `src/llmcompressor/transformers/utils/arg_parser/training_arguments.py`
-* DatasetArguments: `src/llmcompressor/transformers/utils/arg_parser/dataset_arguments.py`
-* RecipeArguments: `src/llmcompressor/transformers/utils/arg_parser/recipe_arguments.py`
+* ModelArguments: `src/llmcompressor/args/model_arguments.py`
+* TrainingArguments: `src/llmcompressor/args/training_arguments.py`
+* DatasetArguments: `src/llmcompressor/args/dataset_arguments.py`
+* RecipeArguments: `src/llmcompressor/args/recipe_arguments.py`

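The four argument groups above are standard dataclasses, so they can be combined into a single command-line parser. Below is a minimal sketch, assuming the dataclasses are importable from `llmcompressor.args` as the new file paths suggest; it is illustrative, not the library's documented entry point.

```python
# Minimal sketch: combine the four argument groups into one CLI parser.
# Assumes the dataclasses are exposed from `llmcompressor.args`, as the
# new file paths above suggest; this is illustrative, not a documented API.
from transformers import HfArgumentParser

from llmcompressor.args import (
    DatasetArguments,
    ModelArguments,
    RecipeArguments,
    TrainingArguments,
)

parser = HfArgumentParser(
    (ModelArguments, DatasetArguments, RecipeArguments, TrainingArguments)
)
# Each returned object holds only the flags defined by its dataclass.
model_args, dataset_args, recipe_args, training_args = (
    parser.parse_args_into_dataclasses()
)
```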

## Running Multi-Stage Recipes
@@ -90,9 +50,6 @@ mode.
See [example_alternating_recipe.yaml](../../../../examples/finetuning/example_alternating_recipe.yaml) for an example
of a staged recipe for Llama.

-### Python Example
-(This can also be run with FSDP by launching the script as `accelerate launch --config_file example_fsdp_config.yaml test_multi.py`)
-
test_multi.py
```python
from llmcompressor.transformers import apply
# ... (remainder of test_multi.py collapsed in the diff view)
```
