Releases: mozilla/DeepSpeech
v0.9.0-alpha.2
Bump VERSION to 0.9.0-alpha.2
v0.9.0-alpha.1
Bump VERSION to 0.9.0-alpha.1
v0.8.0-alpha.6
Bump VERSION to 0.8.0-alpha.6
v0.8.0-alpha.5
Bump VERSION to 0.8.0-alpha.5
v0.9.0-alpha.0
Merge pull request #3098 from lissyx/bump-0.9 Bump VERSION to 0.9.0-alpha.0
v0.8.0-alpha.4
Merge pull request #3097 from lissyx/update-0.8 Update 0.8
DeepSpeech 0.7.4
General
This is the 0.7.4 release of Deep Speech, an open speech-to-text engine. In accord with semantic versioning, this version is not backwards compatible with version 0.6.1 or earlier versions. This is a bugfix release and retains compatibility with the 0.7.0 models. All model files included here are identical to the ones in the 0.7.0 release. As with previous releases, this release includes the source code:
and the acoustic models:
deepspeech-0.7.4-models.pbmm
deepspeech-0.7.4-models.tflite.
The model with the ".pbmm" extension is memory mapped and thus memory efficient and fast to load. The model with the ".tflite" extension is converted to use TFLite, has post-training quantization enabled, and is more suitable for resource constrained environments.
The acoustic models were trained on American English and the pbmm model achieves a 5.97% word error rate on the LibriSpeech clean test corpus.
In addition we release the scorer:
deepspeech-0.7.4-models.scorer
which takes the place of the language model and trie in older releases.
We also include example audio files:
which can be used to test the engine, and checkpoint files:
deepspeech-0.7.4-checkpoint.tar.gz
which can be used as the basis for further fine-tuning.
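The compatibility rule stated above (breaking relative to 0.6.1, model-compatible with 0.7.0) can be sketched as a version check. The helper below is purely illustrative and not part of the DeepSpeech API: during the 0.x series a minor-version bump signals a breaking change, so releases sharing a minor version remain model-compatible.

```python
# Hypothetical helper illustrating the compatibility rule described above:
# during the 0.x series, releases sharing the same minor version (e.g. 0.7.0
# and 0.7.4) remain model-compatible, while a minor bump (0.6 -> 0.7) breaks it.
def models_compatible(version_a: str, version_b: str) -> bool:
    major_a, minor_a = (int(p) for p in version_a.split(".")[:2])
    major_b, minor_b = (int(p) for p in version_b.split(".")[:2])
    if major_a == 0 and major_b == 0:
        # Pre-1.0: the minor version acts as the breaking-change indicator.
        return minor_a == minor_b
    return major_a == major_b

print(models_compatible("0.7.4", "0.7.0"))  # True: 0.7.x models are interchangeable
print(models_compatible("0.7.4", "0.6.1"))  # False: 0.7 broke compatibility with 0.6
```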
Notable changes from the previous release
- Fix csv.DictWriter configuration on Windows in some importers (#3045)
- Reduce number of users of VERSION and GRAPH_VERSION symlinks to fix issues on Windows (#3043)
- Fix bug in ds_ctcdecoder SWIG definition which was causing wrapper objects to be leaked (#3049)
- Add support for read-only validation metrics (not affecting best validation checkpoint logic) (#3051)
- Fix some importers to report total imported audio duration alongside total input audio duration (#3054)
- Separate Dockerfile into one for training and one for building native client related tools (#3060)
- Add list of supported platforms to ReadTheDocs (#3065)
- Add third-party bindings for the Nim language (#3076)
- Avoid reinstalling TensorFlow package from PyPI when using Docker bases that already come with it (#3072)
- Refactor artifact caching mechanism in CI (#3069)
Training Regimen + Hyperparameters for fine-tuning
The hyperparameters used to train the model are useful for fine-tuning. Thus, we document them here along with the training regimen, hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of cuDNN RNN.
In contrast to previous releases, training for this release occurred in several phases, each phase with a lower learning rate than the phase before it.
The initial phase used the hyperparameters:
- train_files: Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1700 hours of transcribed WAMU (NPR) radio shows explicitly licensed to use as training corpora
- dev_files: LibriSpeech clean dev corpus
- test_files: LibriSpeech clean test corpus
- train_batch_size: 128
- dev_batch_size: 128
- test_batch_size: 128
- n_hidden: 2048
- learning_rate: 0.0001
- dropout_rate: 0.40
- epochs: 125
The weights with the best validation loss were selected at the end of 125 epochs using --noearly_stop.
The second phase was started using the weights with the best validation loss from the previous phase. This second phase used the same hyperparameters as the first but with the following changes:
- learning_rate: 0.00001
- epochs: 100
The weights with the best validation loss were selected at the end of 100 epochs using --noearly_stop.
Like the second, the third phase was started using the weights with the best validation loss from the previous phase. This third phase used the same hyperparameters as the second but with the following changes:
- learning_rate: 0.000005
The weights with the best validation loss were selected at the end of 100 epochs using --noearly_stop. The model selected under this process was trained for a total of 732522 steps over all phases.
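The three phases above can be restated compactly in code. This is a plain-Python summary of the schedule for reference only; the structure and names below are ours, not part of the DeepSpeech training scripts.

```python
# Restatement of the three-phase training regimen described above.
# Each phase resumes from the best-validation-loss weights of the previous
# phase and lowers the learning rate; --noearly_stop means every phase runs
# its full epoch budget.
PHASES = [
    {"learning_rate": 0.0001,   "epochs": 125},
    {"learning_rate": 0.00001,  "epochs": 100},
    {"learning_rate": 0.000005, "epochs": 100},
]

def total_epochs(phases):
    return sum(p["epochs"] for p in phases)

for i, phase in enumerate(PHASES, start=1):
    print(f"phase {i}: lr={phase['learning_rate']}, epochs={phase['epochs']}")
print("total epochs:", total_epochs(PHASES))  # 325
```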
Subsequent to this, lm_optimizer.py was used with the following parameters:
- lm_alpha_max: 5
- lm_beta_max: 5
- n_trials: 2400
- test_files: LibriSpeech clean dev corpus
to determine the optimal lm_alpha and lm_beta with respect to the LibriSpeech clean dev corpus. This resulted in:
- lm_alpha: 0.931289039105002
- lm_beta: 1.1834137581510284
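The search above can be sketched as a random search over the two scorer weights. The snippet below is an illustration only: mock_wer is a toy stand-in objective of our invention, whereas the real lm_optimizer.py decodes the dev set with each candidate (alpha, beta) pair and measures the actual word error rate.

```python
import random

# Illustrative sketch of the lm_alpha/lm_beta search described above.
# mock_wer is a stand-in objective; the real lm_optimizer.py drives the
# decoder over the dev set rather than evaluating a toy function.
def mock_wer(alpha: float, beta: float) -> float:
    # Toy bowl-shaped objective with its minimum near the published values.
    return (alpha - 0.93) ** 2 + (beta - 1.18) ** 2

def tune_scorer(lm_alpha_max=5.0, lm_beta_max=5.0, n_trials=2400, seed=0):
    rng = random.Random(seed)
    best = (float("inf"), None, None)
    for _ in range(n_trials):
        alpha = rng.uniform(0.0, lm_alpha_max)
        beta = rng.uniform(0.0, lm_beta_max)
        wer = mock_wer(alpha, beta)
        if wer < best[0]:
            best = (wer, alpha, beta)
    return best

wer, lm_alpha, lm_beta = tune_scorer()
print(f"best lm_alpha={lm_alpha:.3f}, lm_beta={lm_beta:.3f}")
```

With 2400 trials over the [0, 5] x [0, 5] box, the search lands close to the toy objective's minimum, mirroring how the real search narrows in on the published values.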
Bindings
This release also includes a Python-based command line tool, deepspeech, installed through:
pip install deepspeech
Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (see below for which GPUs are supported). This is done by instead installing the GPU-specific package:
pip install deepspeech-gpu
On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:
pip install deepspeech-tflite
Also, it exposes bindings for the following languages:
- Python (versions 3.5, 3.6, 3.7 and 3.8), installed via
  pip install deepspeech
  Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (see below for which GPUs are supported). This is done by instead installing the GPU-specific package:
  pip install deepspeech-gpu
  On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:
  pip install deepspeech-tflite
- NodeJS (versions 10.x, 11.x, 12.x, 13.x and 14.x), installed via
  npm install deepspeech
  Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (see below for which GPUs are supported). This is done by instead installing the GPU-specific package:
  npm install deepspeech-gpu
  On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:
  npm install deepspeech-tflite
- ElectronJS versions 5.0, 6.0, 6.1, 7.0, 7.1, 8.0 and 9.0 are also supported.
- C, which requires that the appropriate shared objects are installed from
  native_client.tar.xz
  (See the section in the main README which describes native_client.tar.xz installation.)
- .NET, which is installed by following the instructions on the NuGet package page.
In addition, there are third-party bindings that are supported by external developers, for example:
- Rust which is installed by following the instructions on the external Rust repo.
- Go which is installed by following the instructions on the external Go repo.
- V which is installed by following the instructions on the external Vlang repo.
Supported Platforms
- Windows 8.1, 10, and Server 2012 R2, 64-bit (needs at least AVX support; requires the Redistributable Visual C++ 2015 Update 3 (64-bit) for runtime)
- OS X 10.10, 10.11, 10.12, 10.13, 10.14 and 10.15
- Linux x86 64 bit with a modern CPU (Needs at least AVX/FMA)
- Linux x86 64 bit with a modern CPU + NVIDIA GPU (Compute Capability at least 3.0, see NVIDIA docs)
- Raspbian Buster on Raspberry Pi 3 + Raspberry Pi 4
- ARM64 built against Debian/ARMbian Buster and tested on LePotato boards
- Java Android bindings / demo app. Early preview, tested only on Pixel 2 device, TF Lite model only.
Documentation
Documentation is available on deepspeech.readthedocs.io.
Contact/Getting Help
v0.8.0-alpha.3
Merge branch 'make-0.8-alpha.3' into r0.8
DeepSpeech 0.7.3
General
This is the 0.7.3 release of Deep Speech, an open speech-to-text engine. In accord with semantic versioning, this version is not backwards compatible with version 0.6.1 or earlier versions. This is a bugfix release and retains compatibility with the 0.7.0 models. All model files included here are identical to the ones in the 0.7.0 release. As with previous releases, this release includes the source code:
and the acoustic models:
deepspeech-0.7.3-models.pbmm
deepspeech-0.7.3-models.tflite.
The model with the ".pbmm" extension is memory mapped and thus memory efficient and fast to load. The model with the ".tflite" extension is converted to use TFLite, has post-training quantization enabled, and is more suitable for resource constrained environments.
The acoustic models were trained on American English and the pbmm model achieves a 5.97% word error rate on the LibriSpeech clean test corpus.
In addition we release the scorer:
deepspeech-0.7.3-models.scorer
which takes the place of the language model and trie in older releases.
We also include example audio files:
which can be used to test the engine, and checkpoint files:
deepspeech-0.7.3-checkpoint.tar.gz
which can be used as the basis for further fine-tuning.
Notable changes from the previous release
- Fix bug where the test_csvs argument was ignored (#2994)
- Convert path to str to fix Python 3.5 compat (#3025)
- Added support for NodeJS v14 and ElectronJS v9.0 (#3027)
- Improve error handling around Scorer (#2998)
- Windows support in setup.py decoder wheel installation (#3001)
- Fix JS IntermediateDecodeWithMetadata binding (#3011)
- Switch index.js to TypeScript (#3012)
- Return raw scores in confidence value (#3021)
Training Regimen + Hyperparameters for fine-tuning
The hyperparameters used to train the model are useful for fine-tuning. Thus, we document them here along with the training regimen, hardware used (a server with 8 Quadro RTX 6000 GPUs, each with 24GB of VRAM), and our use of cuDNN RNN.
In contrast to previous releases, training for this release occurred in several phases, each phase with a lower learning rate than the phase before it.
The initial phase used the hyperparameters:
- train_files: Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1700 hours of transcribed WAMU (NPR) radio shows explicitly licensed to use as training corpora
- dev_files: LibriSpeech clean dev corpus
- test_files: LibriSpeech clean test corpus
- train_batch_size: 128
- dev_batch_size: 128
- test_batch_size: 128
- n_hidden: 2048
- learning_rate: 0.0001
- dropout_rate: 0.40
- epochs: 125
The weights with the best validation loss were selected at the end of 125 epochs using --noearly_stop.
The second phase was started using the weights with the best validation loss from the previous phase. This second phase used the same hyperparameters as the first but with the following changes:
- learning_rate: 0.00001
- epochs: 100
The weights with the best validation loss were selected at the end of 100 epochs using --noearly_stop.
Like the second, the third phase was started using the weights with the best validation loss from the previous phase. This third phase used the same hyperparameters as the second but with the following changes:
- learning_rate: 0.000005
The weights with the best validation loss were selected at the end of 100 epochs using --noearly_stop. The model selected under this process was trained for a total of 732522 steps over all phases.
Subsequent to this, lm_optimizer.py was used with the following parameters:
- lm_alpha_max: 5
- lm_beta_max: 5
- n_trials: 2400
- test_files: LibriSpeech clean dev corpus
to determine the optimal lm_alpha and lm_beta with respect to the LibriSpeech clean dev corpus. This resulted in:
- lm_alpha: 0.931289039105002
- lm_beta: 1.1834137581510284
Bindings
This release also includes a Python-based command line tool, deepspeech, installed through:
pip install deepspeech
Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (see below for which GPUs are supported). This is done by instead installing the GPU-specific package:
pip install deepspeech-gpu
On Linux, macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:
pip install deepspeech-tflite
Also, it exposes bindings for the following languages:
- Python (versions 3.5, 3.6, 3.7 and 3.8), installed via
  pip install deepspeech
  Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (see below for which GPUs are supported). This is done by instead installing the GPU-specific package:
  pip install deepspeech-gpu
  On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:
  pip install deepspeech-tflite
- NodeJS (versions 10.x, 11.x, 12.x, 13.x and 14.x), installed via
  npm install deepspeech
  Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux (see below for which GPUs are supported). This is done by instead installing the GPU-specific package:
  npm install deepspeech-gpu
  On Linux (AMD64), macOS and Windows, the DeepSpeech package does not use TFLite by default. A TFLite version of the package on those platforms is available as:
  npm install deepspeech-tflite
- ElectronJS versions 5.0, 6.0, 6.1, 7.0, 7.1, 8.0 and 9.0 are also supported.
- C, which requires that the appropriate shared objects are installed from
  native_client.tar.xz
  (See the section in the main README which describes native_client.tar.xz installation.)
- .NET, which is installed by following the instructions on the NuGet package page.
In addition, there are third-party bindings that are supported by external developers, for example:
- Rust which is installed by following the instructions on the external Rust repo.
- Go which is installed by following the instructions on the external Go repo.
- V which is installed by following the instructions on the external Vlang repo.
Supported Platforms
- Windows 8.1, 10, and Server 2012 R2, 64-bit (needs at least AVX support; requires the Redistributable Visual C++ 2015 Update 3 (64-bit) for runtime)
- OS X 10.10, 10.11, 10.12, 10.13, 10.14 and 10.15
- Linux x86 64 bit with a modern CPU (Needs at least AVX/FMA)
- Linux x86 64 bit with a modern CPU + NVIDIA GPU (Compute Capability at least 3.0, see NVIDIA docs)
- Raspbian Buster on Raspberry Pi 3 + Raspberry Pi 4
- ARM64 built against Debian/ARMbian Buster and tested on LePotato boards
- Java Android bindings / demo app. Early preview, tested only on Pixel 2 device, TF Lite model only.
Documentation
Documentation is available on deepspeech.readthedocs.io.
Contact/Getting Help
- FAQ - We have a list of common questions, and their answers, in our FAQ. When just getting started, it's best to first check the FAQ to see if your question is addressed.
- Discourse Forums - If your question is not addressed in the FAQ, the Discourse Forums is the next place to look. They contain conversations on General Topics, [Using Deep S...