From effbb7205f6c259562b999ca33223c5699b9264c Mon Sep 17 00:00:00 2001
From: Aymeric Weinbach <397730+zecloud@users.noreply.github.com>
Date: Fri, 9 Jun 2023 14:32:39 +0200
Subject: [PATCH] update readme for an environment variable for transformers
 cache

---
 README.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 2be2a86b..35ca4779 100644
--- a/README.md
+++ b/README.md
@@ -148,11 +148,12 @@ good clips:
 To use a GPU with Docker, [install the appropriate drivers and the NVIDIA Container Runtime](https://docs.docker.com/config/containers/resource_constraints/#gpu).
 If you have a modern GPU like the RTX 4090, you can build your Docker image using the Dockerfile.moderngpu file.
 If you have an older GPU like the K80 or V100, you can use the other Dockerfile.
 
-Download models and put them in a models folder
+Download the models and put them in a models folder, and create an empty transformers folder inside it to serve as the download cache for Hugging Face transformers. Mount the models folder as a volume in your container.
 It's also useful to mount another volume for the outputs, so create an outputs folder too.
+
 ```shell
- docker run -v models:/src/models outputs:/outputs -e "TORTOISE_MODELS_DIR=/src/models" --rm --gpus all tortoise_tts:latest python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast --output_path /outputs
+ docker run -v models:/src/models -v outputs:/outputs -e "TORTOISE_MODELS_DIR=/src/models" -e "TRANSFORMERS_CACHE=/src/models/transformers" --rm --gpus all tortoise_tts:latest python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast --output_path /outputs
 ```
 
 ## Advanced Usage
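Taken together, the patched instructions amount to: create a models folder with an empty transformers subfolder, create an outputs folder, then run the image with both mounted and `TRANSFORMERS_CACHE` pointed into the models mount. A minimal host-side sketch of those steps (folder names taken from the patch; the image tag `tortoise_tts:latest` is assumed, and the `docker run` command is stored in a variable rather than executed here because it requires Docker and the NVIDIA runtime):

```shell
# Host-side setup: the folder layout the patched README assumes
# (models/ for downloaded models, models/transformers/ as the
# Hugging Face transformers cache, outputs/ for generated audio)
mkdir -p models/transformers outputs

# The run command from the patch, using bind mounts of the folders
# just created; shown as a variable, not executed (sketch only)
RUN_CMD='docker run \
  -v "$PWD/models:/src/models" \
  -v "$PWD/outputs:/outputs" \
  -e TORTOISE_MODELS_DIR=/src/models \
  -e TRANSFORMERS_CACHE=/src/models/transformers \
  --rm --gpus all tortoise_tts:latest \
  python tortoise/do_tts.py --text "I am going to speak this" \
  --voice random --preset fast --output_path /outputs'
```

Note that `-v models:/src/models`, as written in the patch, names a Docker-managed volume rather than the `models` folder in the current directory; to mount the folder you just created, use an absolute path such as `$PWD/models` as in the sketch above.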