Commit 476c6a0 ("update readme") by ToTheBeginning, Sep 15, 2024 — 1 changed file (`docs/pulid_for_flux.md`), 2 additions and 0 deletions.
If PuLID-FLUX is helpful, please help to ⭐ this repo or recommend it to your friends.

### Local Gradio Demo
You first need to follow [dependencies-and-installation](../README.md#wrench-dependencies-and-installation) to set
up the environment, then download `flux1-dev.safetensors` (if you want to use bf16 rather than fp8) and `ae.safetensors` from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main).
The PuLID-FLUX model will be automatically downloaded from [huggingface](https://huggingface.co/guozinan/PuLID/tree/main).
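The two FLUX.1-dev files above can also be fetched programmatically with `huggingface_hub`. This is a minimal sketch, not part of the repo: the `download_flux_weights` helper and the `models` target directory are assumptions, and the FLUX.1-dev repo is gated, so you may need to accept its license and be logged in first.

```python
# Required FLUX.1-dev files for the bf16 path, per the text above.
FLUX_REPO = "black-forest-labs/FLUX.1-dev"
FLUX_FILES = ["flux1-dev.safetensors", "ae.safetensors"]

def download_flux_weights(local_dir="models"):
    """Fetch each required file into local_dir and return the local paths.

    Hypothetical helper; the `models` target directory is an assumption.
    """
    # Lazy import so the file list can be inspected without the package installed.
    from huggingface_hub import hf_hub_download
    return [
        hf_hub_download(repo_id=FLUX_REPO, filename=name, local_dir=local_dir)
        for name in FLUX_FILES
    ]
```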

There are the following four options for running the gradio demo:

Run `python app_flux.py --offload`; the peak memory is under 30GB.

#### fp8 + offload (for consumer-grade GPUs)
To use fp8, make sure you have installed the packages in `requirements-fp8.txt`; they include `optimum-quanto` and a newer version of PyTorch.
We use the `flux-dev-fp8` checkpoint from [XLabs-AI/flux-dev-fp8](https://huggingface.co/XLabs-AI/flux-dev-fp8); it will be downloaded automatically. You can also download it manually and put it in the models folder.
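For the manual route, one option is the `huggingface-cli download` command that ships with `huggingface_hub` (a sketch; the `models/flux-dev-fp8` target directory is an assumption matching the folder mentioned above):

```shell
# Fetch the XLabs-AI fp8 checkpoint into a local models folder.
huggingface-cli download XLabs-AI/flux-dev-fp8 --local-dir models/flux-dev-fp8
```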

Run `python app_flux.py --offload --fp8 --onnx_provider cpu`; the peak memory is under 15GB, which suits GPUs with 16GB of memory.
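The choice between the two offload commands above comes down to available GPU memory. A small sketch, assuming the thresholds from the text (~30GB peak for bf16 + offload, ~15GB for fp8 + offload); `launch_cmd` is a hypothetical helper, not part of the repo:

```shell
# Pick a launch command from the available GPU memory in GB.
launch_cmd() {
  if [ "$1" -ge 30 ]; then
    echo "python app_flux.py --offload"
  else
    echo "python app_flux.py --offload --fp8 --onnx_provider cpu"
  fi
}

launch_cmd 16   # prints the fp8 + offload command
```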
