flask+tornado based NVIDIA tacotron2+waveglow tts web app

dpny518/flask-tacotron2-tts-web-app


Flask-Tacotron2-TTS-Web-App

This repo was forked from NVIDIA/Tacotron2 for inference testing only (not for training).

Because I wasn't familiar with Flask, the web app structure was forked from CodeDem/flask-musing-streaming.

If you want to test NVIDIA Tacotron2 models in a Jupyter notebook, you are better off with the inference notebook in NVIDIA/Tacotron2.

example

Installation

  1. Install PyTorch 1.0 (an NVIDIA CUDA GPU is required!)

  2. pip install -r requirement.txt

  3. Clone the WaveGlow repo: https://github.com/NVIDIA/waveglow.git

    or run: git submodule init; git submodule update

  4. You need both a Tacotron2 and a WaveGlow model:

    1. NVIDIA/Tacotron2's models for the inference demo: Tacotron 2, WaveGlow

    2. or my trained models:

      Tacotron2: English_90k_steps (LJSpeech dataset), Korean_162k_steps (KSS dataset)

      WaveGlow: waveglow_152k_steps, trained on the Korean dataset

Usage

python app.py

or you can test TTS from the console: python console_test.py
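For reference, here is a minimal sketch of what a Flask TTS endpoint like the one in app.py might look like. The route name, the "text" form field, and the synthesize stub are all assumptions for illustration; the real app runs the text through Tacotron2 to get a mel spectrogram and through WaveGlow to get audio, while this sketch just returns a short sine tone so it is self-contained.

```python
import io
import math
import struct
import wave

from flask import Flask, request, send_file

app = Flask(__name__)


def synthesize(text: str) -> bytes:
    """Placeholder for Tacotron2 + WaveGlow inference.

    The real implementation would map text -> mel spectrogram (Tacotron2)
    -> waveform (WaveGlow). Here we emit a 0.25 s 440 Hz tone instead.
    """
    sample_rate = 22050  # LJSpeech/Tacotron2 default sampling rate
    n_frames = int(sample_rate * 0.25)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit PCM
        wav.setframerate(sample_rate)
        for i in range(n_frames):
            sample = int(32767 * 0.3 * math.sin(2 * math.pi * 440 * i / sample_rate))
            wav.writeframes(struct.pack("<h", sample))
    return buf.getvalue()


@app.route("/tts", methods=["POST"])
def tts():
    # Read the text to synthesize from the submitted form.
    text = request.form.get("text", "")
    audio = synthesize(text)
    # Stream the WAV bytes back to the browser.
    return send_file(io.BytesIO(audio), mimetype="audio/wav")
```

With the server running, posting a "text" form field to /tts returns a playable WAV response.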

You can change the model paths in config.json.
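The exact key names in config.json are repo-specific; a hypothetical layout along these lines shows the idea (check the actual config.json in the repo for the real keys before editing):

```json
{
  "tacotron2_path": "models/tacotron2_english_90k.pt",
  "waveglow_path": "models/waveglow_152k.pt"
}
```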

Results

You may see "Warning! Decoder Max" on the console.

In this case, the synthesized audio will be cut off at 11 seconds and contain weird sounds.

This problem happens often with my Korean-trained model, but rarely with my English-trained model.

I can't hear any difference in the synthesized audio between waveglow_256channels.pt (the WaveGlow demo model) and my waveglow_152k.
