Picus303/BFA-forced-aligner

BFA Forced-Aligner (Text/Phoneme/Audio Alignment)

A CLI Python tool for text/audio alignment at word and phoneme level.
It supports both textual and phonetic input, using either the IPA or Misaki phoneset.
The integrated G2P model supports both British and American English.
The final alignments are output in TextGrid format.

It's based on an RNN-T model (CNN/LSTM encoder + Transformer decoder) and was trained on 460 hours of audio from the LibriSpeech dataset.
The current architecture only supports audio clips up to about 17.5 seconds (see Contributions).
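Because of that ~17.5-second limit, it can be useful to screen a corpus for over-long clips before aligning. This is a minimal sketch using only the standard library; `clip_duration`, `exceeds_limit`, and the constant are illustrative helpers, not part of BFA's API, and the check covers .wav files only.

```python
import wave

MAX_CLIP_SECONDS = 17.5  # approximate context limit of the current model

def clip_duration(path: str) -> float:
    """Return the duration of a .wav file in seconds."""
    with wave.open(path, "rb") as wf:
        return wf.getnframes() / wf.getframerate()

def exceeds_limit(duration: float, limit: float = MAX_CLIP_SECONDS) -> bool:
    """True when a clip is too long for the aligner's context window."""
    return duration > limit
```

Clips flagged by such a check would need to be split (or skipped) before running `bfa align`.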

No GPU is required to run this tool, but a CPU with lots of cores can help.

Installation

pip install BFA

Requires Python ≥ 3.12

Usage (CLI)

To align a corpus, two directories are expected:

  • One that contains all your audio files (.wav, .mp3, .flac and .pcm files only)
  • One that contains all your annotations (.txt and .lab files only)

You can find examples of such files in the example directory of this repository. A recursive search is used, so the only constraint is that both directories use the same structure. If you use the same directory for both, then your .wav and .lab pairs should be in the same sub-directory.
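Since the two trees must mirror each other, a quick sanity check can catch audio files that have no annotation at the same relative path. This is a sketch, not part of BFA; `unmatched_audio` is a hypothetical helper name.

```python
from pathlib import Path

AUDIO_EXTS = {".wav", ".mp3", ".flac", ".pcm"}
TEXT_EXTS = {".txt", ".lab"}

def unmatched_audio(audio_dir: str, text_dir: str) -> list[Path]:
    """Relative paths of audio files with no annotation at the same location."""
    audio_root, text_root = Path(audio_dir), Path(text_dir)
    missing = []
    for audio in sorted(audio_root.rglob("*")):
        if audio.suffix.lower() not in AUDIO_EXTS:
            continue
        rel = audio.relative_to(audio_root)
        # An annotation counts if either extension exists at the mirrored path.
        if not any((text_root / rel).with_suffix(ext).exists() for ext in TEXT_EXTS):
            missing.append(rel)
    return missing
```

Running this before `bfa align` and fixing any reported paths avoids partial alignment runs.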

bfa align \
  --audio-dir /path/to/audio_dir \
  --text-dir /path/to/text_dir \
  [--out-dir /path/to/out_dir] \
  [--dtype {words, phonemes}] \
  [--ptype {IPA, Misaki}] \
  [--language {EN-GB, EN-US}] \
  [--n-jobs N] \
  [--ignore-ram-usage] \
  [--config-path /path/to/config_file]

Performance

Aligning the 460 hours of audio of the LibriSpeech dataset took 2h30 (realtime factor: x184) on an 8-core / 16-thread CPU. Realtime factor on a single core: x11.5.
1.5 GB of RAM per thread is required (here, 24 GB for 16 threads). By default, BFA will check your total RAM before starting jobs.
It successfully aligned more than 99% of the files.
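Given the roughly 1.5 GB-per-thread figure above, a reasonable `--n-jobs` value can be estimated from available RAM and CPU threads. A minimal sketch; `max_jobs` is an illustrative helper, not something BFA exposes.

```python
import os

RAM_PER_JOB_GB = 1.5  # per-thread requirement reported for BFA

def max_jobs(total_ram_gb: float, cpu_threads: int,
             ram_per_job_gb: float = RAM_PER_JOB_GB) -> int:
    """Largest job count that fits in RAM without exceeding CPU threads."""
    return max(1, min(cpu_threads, int(total_ram_gb // ram_per_job_gb)))

# Example: pass the result to --n-jobs, e.g. max_jobs(24.0, os.cpu_count() or 1)
```

With 24 GB of RAM and 16 threads this yields 16 jobs, matching the setup described above; with only 8 GB it would cap the run at 5 jobs.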

To Do:

  • Test IPA ptype
  • Test Word dtype

Contributions

All contributions are welcome, but my main goal is the following:

Currently, the main limitation of this tool is its context length (about 17.5 seconds), but RNN-T models can use a streaming implementation and thereby handle files of arbitrary length. This would require making the model causal (currently it isn't, in order to maximize accuracy) and writing an inference function that supports this mode.

It would also be interesting to support .TextGrid files for annotations (input).

License

This project is licensed under the MIT License. See the LICENSE file for details.
