FastSpeech with a SqueezeWave vocoder in PyTorch: very fast inference on CPU

manmay-nakhashi/Fastspeech_Squeezewave


FastSpeech-Squeezewave-Pytorch

An implementation of FastSpeech based on PyTorch.

Update

2019/10/23

  1. Fix bugs in alignment;
  2. Fix bugs in the transformer;
  3. Fix bugs in LengthRegulator;
  4. Change the way audio is processed;
  5. Use SqueezeWave as the vocoder for synthesis.
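For context on item 3, here is a minimal sketch of what a FastSpeech-style length regulator does. This is illustrative pseudocode in plain Python, not the code from this repository; the function name and list-based representation are assumptions.

```python
# Illustrative sketch of FastSpeech's length regulation (not this repo's
# actual implementation): each phoneme-level hidden vector is repeated
# according to its predicted integer duration, expanding the sequence to
# frame level before the mel-spectrogram decoder.

def length_regulate(hidden_states, durations):
    """Expand a phoneme-level sequence to frame level.

    hidden_states: list of per-phoneme feature vectors (any objects).
    durations: list of non-negative integer frame counts, same length.
    """
    assert len(hidden_states) == len(durations)
    expanded = []
    for h, d in zip(hidden_states, durations):
        expanded.extend([h] * d)  # repeat each phoneme's vector d times
    return expanded

# Example: 3 phonemes with durations 2, 1, 3 expand to 6 frames.
frames = length_regulate(["a", "b", "c"], [2, 1, 3])
```

In the real model the hidden states are tensors and the expansion is batched, but the frame-count bookkeeping is the part where an off-by-one bug would shift the whole alignment.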

Model

Start

Dependencies

  • python 3.6
  • CUDA 10.0
  • pytorch==1.1.0
  • numpy>=1.16.2
  • scipy>=1.2.1
  • librosa>=0.7.2
  • inflect>=2.1.0
  • matplotlib>=2.2.2

Prepare Dataset

  1. Download and extract the LJSpeech dataset.
  2. Put the LJSpeech dataset in the data directory.
  3. Unzip alignments.zip.*
  4. Put the pretrained SqueezeWave model in squeezewave/pretrained_model.
  5. Run python preprocess.py.

* If you want to compute the alignments yourself, do not unzip alignments.zip; instead, put the Nvidia pretrained Tacotron2 model in Tacotron2/pretrained_model.
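When alignments are computed rather than unzipped, the FastSpeech paper derives per-phoneme durations from the teacher model's attention: each decoder frame is assigned to the encoder position it attends to most. A hedged sketch, with a toy hand-written attention matrix (the function name and list-of-lists layout are assumptions, not this repo's preprocessing code):

```python
# Illustrative sketch of extracting per-phoneme durations from a
# Tacotron2 attention matrix, as described in the FastSpeech paper:
# for every decoder frame, find the encoder (phoneme) position with the
# highest attention weight, then count how many frames each position got.

def attention_to_durations(attention, num_phonemes):
    """attention: list of rows, one per decoder frame; each row holds one
    attention weight per phoneme. Returns a frame count per phoneme."""
    durations = [0] * num_phonemes
    for frame_weights in attention:
        best = max(range(num_phonemes), key=lambda i: frame_weights[i])
        durations[best] += 1
    return durations

# Toy attention over 3 phonemes and 5 decoder frames:
attn = [
    [0.9, 0.1, 0.0],
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
    [0.0, 0.1, 0.9],
]
durs = attention_to_durations(attn, 3)  # -> [2, 1, 2]
```

The durations sum to the number of decoder frames, which is what lets the length regulator reproduce the teacher's output length exactly during training.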

Training

Run python train.py.

Test

Run python synthesis.py "text you want to synthesize".

Inference Time


Intel® Core™ i5-6300U CPU

example 1

taskset --cpu-list 1 python3 synthesis.py "Fastspeech with Squeezewave vocoder in pytorch , very fast inference on cpu"

Speech synthesis time: 1.7220683097839355


soxi out:
Input File : 'results/Fastspeech with Squeezewave vocoder in pytorch , very fast inference on cpu_112000_squeezewave.wav'
Channels : 1
Sample Rate : 22050
Precision : 16-bit
Duration : 00:00:05.96 = 131328 samples ~ 446.694 CDDA sectors
File Size : 263k
Bit Rate : 353k
Sample Encoding: 16-bit Signed Integer PCM
Approx. 6 s of audio generated in 1.72 s on a single CPU core.


example 2
taskset --cpu-list 0 python3 synthesis.py "How are you"
Speech synthesis time: 0.3431851863861084
soxi out:
Input File : 'results/How are you _112000_squeezewave.wav'
Channels : 1
Sample Rate : 22050
Precision : 16-bit
Duration : 00:00:00.85 = 18688 samples ~ 63.5646 CDDA sectors
File Size : 37.4k
Bit Rate : 353k
Sample Encoding: 16-bit Signed Integer PCM
Approx. 0.85 s of audio generated in 0.34 s on a single CPU core.
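The two runs above correspond to real-time factors well below 1.0. A quick check of the arithmetic, using the timings and durations reported above:

```python
# Real-time factor (RTF = synthesis time / audio duration) for the two
# examples above; values below 1.0 mean faster-than-real-time synthesis.

def rtf(synthesis_seconds, audio_seconds):
    return synthesis_seconds / audio_seconds

example1 = rtf(1.7220683097839355, 5.96)  # ~0.29
example2 = rtf(0.3431851863861084, 0.85)  # ~0.40
```

Both runs stay comfortably under real time even pinned to a single core of a laptop-class CPU.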

Pretrained Model

Notes

  • In the FastSpeech paper, the authors use a pre-trained Transformer-TTS model to provide the alignment targets. I did not have a well-trained Transformer-TTS model, so I used Tacotron2 instead.
  • The example audio files are in results.
  • The outputs and alignment of Tacotron2 are shown below (the sentence synthesized is "I want to go to CMU to do research on deep learning.").
  • The outputs of FastSpeech and Tacotron2 (the right one is Tacotron2) are shown below (the sentence synthesized is "Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition.").

Reference
