See the original repository https://github.com/Kyubyong/g2p for more information on English Grapheme to Phoneme conversion.
Other than removing unused dependencies and reorganizing the files, the original logic remains intact.
ttstokenizer makes it easy to feed text to speech models with minimal dependencies that are Apache 2.0 compatible.
The standard preprocessing logic for many English Text to Speech (TTS) models is as follows:
- Apply Tacotron text normalization rules
  - This project replicates the logic found in ESPnet
- Convert Graphemes to Phonemes
- Build an integer array mapping Phonemes to their integer token positions
This project adds new tokenizers that run the logic above. The output is consumable by machine learning models.
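As a rough sketch of the last step, assuming normalization and grapheme to phoneme conversion have already produced ARPABET phonemes, and using a hypothetical phoneme to id mapping:

```python
import numpy as np

# Hypothetical phoneme -> token id mapping, for illustration only
TOKENS = {"T": 4, "EH1": 15, "K": 10, "S": 6}

def to_ids(phonemes):
    # Build an integer array mapping phonemes to their token positions
    return np.array([TOKENS[p] for p in phonemes])

# ARPABET phonemes for "text" -> token ids
print(to_ids(["T", "EH1", "K", "S", "T"]))  # [ 4 15 10  6  4]
```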
The easiest way to install is via pip and PyPI
```
pip install ttstokenizer
```
This project has two supported tokenizers.
- `TTSTokenizer` - Tokenizes text to ARPABET phonemes. Word to phoneme definitions are provided by CMUdict. These phonemes are then mapped to token ids using a provided token to token id mapping.
- `IPATokenizer` - Tokenizes text to International Phonetic Alphabet (IPA) phonemes. The graphemes for each phoneme are mapped to token ids.
The `IPATokenizer` is designed to be a drop-in replacement for models that depend on eSpeak to tokenize text into IPA phonemes.
An example of tokenizing text with each tokenizer is shown below.
```python
from ttstokenizer import TTSTokenizer

# tokens is the phoneme to token id mapping for the target model
tokenizer = TTSTokenizer(tokens)
print(tokenizer("Text to tokenize"))

>>> array([ 4, 15, 10, 6, 4, 4, 28, 4, 34, 10, 2, 3, 51, 11])
```
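The `tokens` argument above is the phoneme to token id mapping used by the target model. A minimal sketch of what such a mapping might look like, assuming a plain dict of phoneme strings to integer ids (the values are illustrative, real models ship their own token list):

```python
# Hypothetical phoneme -> token id mapping, for illustration only.
# Use the token list shipped with your model instead.
tokens = {"T": 4, "EH1": 15, "K": 10, "S": 6}
```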
```python
from ttstokenizer import IPATokenizer

tokenizer = IPATokenizer()
print(tokenizer("Text to tokenize"))

>>> array([ 62 156 86 53 61 62 16 62 70 16 62 156 57 135 53 70 56 157 43 102 68])
```
Both tokenizers also support returning raw types to help with debugging.
The following returns ARPABET phonemes instead of token ids for the `TTSTokenizer`.
```python
from ttstokenizer import TTSTokenizer

# No token mapping provided, output is raw ARPABET phonemes
tokenizer = TTSTokenizer()
print(tokenizer("Text to tokenize"))

>>> ['T', 'EH1', 'K', 'S', 'T', 'T', 'AH0', 'T', 'OW1', 'K', 'AH0', 'N', 'AY2', 'Z']
```
The same can be done with the `IPATokenizer`. The following returns the transcribed IPA tokens.
```python
from ttstokenizer import IPATokenizer

# tokenize=False returns the IPA transcription instead of token ids
tokenizer = IPATokenizer(tokenize=False)
print(tokenizer("Text to tokenize"))

>>> "tˈɛkst tɐ tˈoʊkɐnˌaɪz"
```
The `IPATokenizer` can also accept IPA transcriptions directly. Setting `transcribe=False` skips the transcription step and only maps the input IPA string to token ids.
```python
from ttstokenizer import IPATokenizer

# transcribe=False expects IPA input and only maps it to token ids
tokenizer = IPATokenizer(transcribe=False)
print(tokenizer("tˈɛkst tɐ tˈoʊkɐnˌaɪz"))

>>> array([[ 62 156 86 53 61 62 16 62 70 16 62 156 57 135 53 70 56 157 43 102 68]])
```
Notice how the output is the same as above. When synthesized speech doesn't sound right, these debugging options can help trace what's going on.
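For example, a minimal debugging flow using only the options shown above: transcribe first, inspect the IPA string, then tokenize it separately.

```python
from ttstokenizer import IPATokenizer

# Step 1: transcribe only and inspect the IPA string
ipa = IPATokenizer(tokenize=False)("Text to tokenize")
print(ipa)

# Step 2: map the verified IPA string to token ids
print(IPATokenizer(transcribe=False)(ipa))
```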