docs: streamline readme and reuse content in other docs pages

eginhard committed Dec 12, 2024
1 parent cd79723 commit c1f5e90
Showing 6 changed files with 202 additions and 394 deletions.

README.md (121 additions, 108 deletions):
# <img src="https://raw.githubusercontent.com/idiap/coqui-ai-TTS/main/images/coqui-log-green-TTS.png" height="56"/>

**🐸 Coqui TTS is a library for advanced Text-to-Speech generation.**

🚀 Pretrained models in +1100 languages.

______________________________________________________________________

[![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/coqui-tts)](https://pypi.org/project/coqui-tts/)
[![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0)
[![PyPI version](https://badge.fury.io/py/coqui-tts.svg)](https://pypi.org/project/coqui-tts/)
[![Downloads](https://pepy.tech/badge/coqui-tts)](https://pepy.tech/project/coqui-tts)
[![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440)

[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/tests.yml)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/docker.yaml)
[![GithubActions](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml/badge.svg)](https://github.com/idiap/coqui-ai-TTS/actions/workflows/style_check.yml)
[![Docs](<https://readthedocs.org/projects/coqui-tts/badge/?version=latest&style=plastic>)](https://coqui-tts.readthedocs.io/en/latest/)


## News
- 📣 Fork of the [original, unmaintained repository](https://github.com/coqui-ai/TTS). New PyPI package: [coqui-tts](https://pypi.org/project/coqui-tts)
- 📣 [OpenVoice](https://github.com/myshell-ai/OpenVoice) models now available for voice conversion.
- 📣 Prebuilt wheels are now also published for Mac and Windows (in addition to Linux as before) for easier installation across platforms.
- 📣 XTTSv2 is here with 17 languages and better performance across the board. XTTS can stream with <200ms latency.
- 📣 XTTS fine-tuning code is out. Check the [example recipes](https://github.com/idiap/coqui-ai-TTS/tree/dev/recipes/ljspeech).
- 📣 You can use [Fairseq models in ~1100 languages](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.

______________________________________________________________________

## 💬 Where to ask questions
The issues and discussions of the original repository are also still a useful source of information.

You can also help us implement more models.

<!-- start installation -->
## Installation

🐸TTS is tested on Ubuntu 24.04 with **python >= 3.9, < 3.13**, but should also
work on Mac and Windows.

If you are only interested in [synthesizing speech](https://coqui-tts.readthedocs.io/en/latest/inference.html) with the pretrained 🐸TTS models, installing from PyPI is the easiest option.
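For example, installing the PyPI package mentioned in the news above:

```bash
pip install coqui-tts
```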
If you plan to code or train models, clone 🐸TTS and install it locally instead, optionally selecting extras such as the server and Japanese support:

```bash
git clone https://github.com/idiap/coqui-ai-TTS
cd coqui-ai-TTS
pip install -e .[server,ja]
```

### Platforms

If you are on Ubuntu (Debian), you can also run the following commands for installation.

```bash
make system-deps
make install
```

<!-- end installation -->

## Docker Image
You can also try out Coqui TTS without installation with the docker image.
Simply run the following command and you will be able to run TTS:
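A sketch, assuming the CPU-only image `ghcr.io/idiap/coqui-tts-cpu` published for this fork (adjust the image name if yours differs):

```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/idiap/coqui-tts-cpu
# Inside the container:
python3 TTS/server/server.py --list_models                         # list available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # start the demo server
```

You can then try out the server in your browser at http://localhost:5002.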
More details about the docker images (like GPU support) can be found in the [documentation](https://coqui-tts.readthedocs.io/en/latest/).


## Synthesizing speech by 🐸TTS

<!-- start inference -->
### 🐍 Python API

#### Multi-speaker and multi-lingual model

```python
import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models
print(TTS().list_models())

# Initialize TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# List speakers
print(tts.speakers)

# ❗ XTTS supports both, but many models allow only one of the `speaker` and
# `speaker_wav` arguments

# TTS with list of amplitude values as output, clone the voice from `speaker_wav`
wav = tts.tts(
text="Hello world!",
speaker_wav="my/cloning/audio.wav",
language="en"
)

# TTS to a file, use a preset speaker
tts.tts_to_file(
text="Hello world!",
speaker="Craig Gutsy",
language="en",
file_path="output.wav"
)
```

#### Single speaker model

```python
# Initialize TTS with the target model name
tts = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS("tts_models/multilingual/multi-dataset/your_tts").to(device)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")
```

#### Voice conversion (VC)

Converting the voice in `source_wav` to the voice of `target_wav`:

```python
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
tts = TTS("voice_conversion_models/multilingual/vctk/freevc24").to("cuda")
tts.voice_conversion_to_file(
source_wav="my/source.wav",
target_wav="my/target.wav",
file_path="output.wav"
)
```

Other available voice conversion models:
- `voice_conversion_models/multilingual/multi-dataset/openvoice_v1`
- `voice_conversion_models/multilingual/multi-dataset/openvoice_v2`
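
These use the same API; for example, a sketch with OpenVoice v2 and placeholder file paths:

```python
# Sketch: voice conversion with the OpenVoice v2 model listed above
tts = TTS("voice_conversion_models/multilingual/multi-dataset/openvoice_v2").to("cuda")
tts.voice_conversion_to_file(
    source_wav="my/source.wav",
    target_wav="my/target.wav",
    file_path="output.wav"
)
```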

#### Voice cloning by combining single speaker TTS model with the default VC model

This way, you can clone voices by using any model in 🐸TTS. The FreeVC model is
used for voice conversion after synthesizing speech.
```python
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```

#### TTS using Fairseq models in ~1100 languages 🤯
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).
```python
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    file_path="output.wav"
)
```

### Command-line interface `tts`

<!-- begin-tts-readme -->

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any models, then it uses a Tacotron2 English model trained
on LJSpeech.


- List provided models:

```sh
tts --list_models
```

- Get model information. Use the names obtained from `--list_models`.
```sh
tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
For example:
```sh
tts --model_info_by_name tts_models/tr/common-voice/glow-tts
tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```

#### Single Speaker Models

- Run TTS with the default model (`tts_models/en/ljspeech/tacotron2-DDC`):

```sh
tts --text "Text for TTS" --out_path output/path/speech.wav
```

- Run TTS and pipe out the generated TTS wav file data:

```sh
tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
```

- Run a TTS model with its default vocoder model:

```sh
tts --text "Text for TTS" \
--model_name "<model_type>/<language>/<dataset>/<model_name>" \
--out_path output/path/speech.wav
```

For example:

```sh
tts --text "Text for TTS" \
--model_name "tts_models/en/ljspeech/glow-tts" \
--out_path output/path/speech.wav
```

- Run with specific TTS and vocoder models from the list. Note that not every vocoder is compatible with every TTS model.

```sh
tts --text "Text for TTS" \
--model_name "<model_type>/<language>/<dataset>/<model_name>" \
--vocoder_name "<model_type>/<language>/<dataset>/<model_name>" \
--out_path output/path/speech.wav
```

For example:

```sh
tts --text "Text for TTS" \
--model_name "tts_models/en/ljspeech/glow-tts" \
--vocoder_name "vocoder_models/en/ljspeech/univnet" \
--out_path output/path/speech.wav
```

- Run your own TTS model (using Griffin-Lim Vocoder):

```sh
tts --text "Text for TTS" \
--model_path path/to/model.pth \
--config_path path/to/config.json \
--out_path output/path/speech.wav
```

- Run your own TTS and Vocoder models:

```sh
tts --text "Text for TTS" \
--model_path path/to/model.pth \
--config_path path/to/config.json \
--out_path output/path/speech.wav \
--vocoder_path path/to/vocoder.pth \
--vocoder_config_path path/to/vocoder_config.json
```

#### Multi-speaker Models

- List the available speakers and choose a `<speaker_id>` among them:

```sh
tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```

- Run the multi-speaker TTS model with the target speaker ID:

```sh
tts --text "Text for TTS." --out_path output/path/speech.wav \
--model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```

- Run your own multi-speaker TTS model:

```sh
tts --text "Text for TTS" --out_path output/path/speech.wav \
--model_path path/to/model.pth --config_path path/to/config.json \
--speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```

#### Voice Conversion Models

```sh
tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" \
--source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```

<!-- end-tts-readme -->