# Deepspeech Català

An ASR model created with the Mozilla [DeepSpeech](https://github.com/mozilla/DeepSpeech) engine. (Jump to [english](#deepspeech-catalan-asr-model))

Model de reconeixement de la parla creat amb el motor [DeepSpeech](https://github.com/mozilla/DeepSpeech) de Mozilla. Us podeu descarregar l'última versió [aquí](https://github.com/ccoreilly/deepspeech-catala/releases).

Podeu provar el model enviant un missatge de veu al bot de Telegram [DeepSpeechCatalà](https://t.me/DeepSpeechCatalaBot).

## Motivació

La motivació principal és la d'aprendre, per la qual cosa el model evoluciona constantment a mesura que vaig fent proves. També tenia curiositat per saber què era possible amb el corpus lliure actual de [CommonVoice](https://voice.mozilla.org/ca/datasets) (la resposta hauria de motivar tothom a contribuir-hi encara més).

## Com fer-lo servir

Descarregueu-vos el model i l'scorer i feu servir el motor d'inferència de DeepSpeech per a inferir el text d'un arxiu d'àudio (WAV mono a 16 kHz):

```shell
$ deepspeech --model deepspeech-catala-0.6.0.pbmm --scorer kenlm.scorer --audio file.wav
```

## Comparativa de models

A continuació, una comparativa de les diferents versions del model, el corpus emprat i els resultats de l'avaluació.

Les versions anteriors a la 0.4.0 feien servir un alfabet sense vocals accentuades, per la qual cosa no es consideren representatives.

### Corpus d'avaluació ParlamentParla

Nota: per a la versió 0.6.0 del model vaig combinar el corpus complet (train, dev i test) de CommonVoice amb el de [ParlamentParlaClean](https://collectivat.cat/asr), per després barrejar-lo i dividir-lo en tres sets: train (75%), dev (20%) i test (5%). D'aquesta manera s'ha augmentat el nombre de dades d'entrenament. Com que, a causa d'això, el set de test conté dades del corpus CommonVoice que podrien haver estat emprades en l'entrenament dels altres models, s'han avaluat tots els models exclusivament amb 1713 frases que cap model no ha vist mai (totes del corpus ParlamentParlaClean).

| Model                                                                 | Corpus                          | Dades augmentades? | WER    | CER    | Loss   |
| --------------------------------------------------------------------- | ------------------------------- | ------------------ | ------ | ------ | ------ |
| [email protected]                                                   | CommonVoice                     | No                 | 30,16% | 13,79% | 112,96 |
| [email protected]                                                   | CommonVoice                     | Sí                 | 29,66% | 13,84% | 108,52 |
| [email protected]                                                   | CommonVoice+ParlamentParlaClean | No                 | 13,85% | 5,62%  | 50,49  |
| [stashify@deepspeech_cat](https://github.com/stashify/deepspeech_cat) | CommonVoice?                    | Sí                 | 22,62% | 13,59% | 80,45  |

### Corpus d'avaluació [FestCat](http://festcat.talp.cat/devel.php)

| Model                                                                 | Corpus                          | Dades augmentades? | WER    | CER    | Loss   |
| --------------------------------------------------------------------- | ------------------------------- | ------------------ | ------ | ------ | ------ |
| [email protected]                                                   | CommonVoice                     | No                 | 77,60% | 65,62% | 243,25 |
| [email protected]                                                   | CommonVoice                     | Sí                 | 78,12% | 65,61% | 235,60 |
| [email protected]                                                   | CommonVoice+ParlamentParlaClean | No                 | 76,10% | 65,16% | 240,69 |
| [stashify@deepspeech_cat](https://github.com/stashify/deepspeech_cat) | CommonVoice?                    | Sí                 | 80,58% | 66,82% | 180,81 |

Aquesta avaluació demostra que els models no generalitzen gaire bé.

El corpus FestCat té una variabilitat més gran pel que fa al nombre de paraules per frase, amb el 90% de les frases entre 2 i 23 paraules, mentre que en el corpus de CommonVoice la major part de les frases contenen entre 3 i 16 paraules.

Com era d'esperar, avaluant els models només amb les frases del corpus d'avaluació que contenen 4 paraules o més, el resultat millora:

| Model                                                                 | Corpus                          | Dades augmentades? | WER    | CER    | Loss   |
| --------------------------------------------------------------------- | ------------------------------- | ------------------ | ------ | ------ | ------ |
| [email protected]                                                   | CommonVoice                     | No                 | 58,78% | 46,61% | 193,85 |
| [email protected]                                                   | CommonVoice                     | Sí                 | 58,94% | 46,47% | 188,42 |
| [email protected]                                                   | CommonVoice+ParlamentParlaClean | No                 | 56,68% | 46,00% | 189,03 |
| [stashify@deepspeech_cat](https://github.com/stashify/deepspeech_cat) | CommonVoice?                    | Sí                 | 61,11% | 48,16% | 144,78 |

## Possibles següents passos

- Ampliar el corpus de dades d'entrenament
- Optimitzar els paràmetres del model
- Avaluar el model amb un corpus més variat (variants dialectals, soroll, context informal)

# Deepspeech Catalan ASR Model

## Motivation

The main motivation of this project is to learn how to create ASR models using Mozilla's DeepSpeech engine, so the model is constantly evolving. Moreover, I wanted to see what is possible with the currently released [CommonVoice](https://voice.mozilla.org/ca/datasets) Catalan language dataset.

## Usage

Download the model and the scorer and use the deepspeech engine to infer text from an audio file (16 kHz mono WAV):

```shell
$ deepspeech --model deepspeech-catala-0.6.0.pbmm --scorer kenlm.scorer --audio file.wav
```

## Model comparison

What follows is a comparison of the published model versions, the datasets used and the accuracy of each model.

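The WER and CER figures in the tables below are edit distances normalized by reference length, at the word and character level respectively. A minimal sketch of how they are computed (illustrative only, not the evaluation code behind these tables):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (free if equal)
        prev = cur
    return prev[-1]


def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)


def cer(reference, hypothesis):
    """Character error rate: character-level edit distance over reference length."""
    return edit_distance(reference, hypothesis) / len(reference)
```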
### Test corpus from ParlamentParla dataset

Note: for version 0.6.0 the whole CommonVoice dataset (train, dev and test files) was combined with the clean dataset of ParlamentParla, shuffled and split into train/dev/test sets using a 75/20/5 ratio. Because the resulting test set contains CommonVoice data that might have been used to train the other models, the models can only be compared using 1713 sentences from the ParlamentParla dataset not seen by any model during training.

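The shuffle-and-split step described above can be sketched as follows; `split_dataset` and the fixed seed are illustrative, not the actual preprocessing code:

```python
import random


def split_dataset(samples, ratios=(0.75, 0.20, 0.05), seed=42):
    """Shuffle samples and split them into train/dev/test sets by the given ratios."""
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * ratios[0])
    n_dev = int(len(shuffled) * ratios[1])
    train = shuffled[:n_train]
    dev = shuffled[n_train:n_train + n_dev]
    test = shuffled[n_train + n_dev:]  # the remainder, roughly 5%
    return train, dev, test
```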
| Model                                                                 | Corpus                          | Augmentation | WER    | CER    | Loss   |
| --------------------------------------------------------------------- | ------------------------------- | ------------ | ------ | ------ | ------ |
| [email protected]                                                   | CommonVoice                     | No           | 30,16% | 13,79% | 112,96 |
| [email protected]                                                   | CommonVoice                     | Yes          | 29,66% | 13,84% | 108,52 |
| [email protected]                                                   | CommonVoice+ParlamentParlaClean | No           | 13,85% | 5,62%  | 50,49  |
| [stashify@deepspeech_cat](https://github.com/stashify/deepspeech_cat) | CommonVoice?                    | Yes          | 22,62% | 13,59% | 80,45  |

### Test corpus from the [FestCat](http://festcat.talp.cat/devel.php) dataset

| Model                                                                 | Corpus                          | Augmentation | WER    | CER    | Loss   |
| --------------------------------------------------------------------- | ------------------------------- | ------------ | ------ | ------ | ------ |
| [email protected]                                                   | CommonVoice                     | No           | 77,60% | 65,62% | 243,25 |
| [email protected]                                                   | CommonVoice                     | Yes          | 78,12% | 65,61% | 235,60 |
| [email protected]                                                   | CommonVoice+ParlamentParlaClean | No           | 76,10% | 65,16% | 240,69 |
| [stashify@deepspeech_cat](https://github.com/stashify/deepspeech_cat) | CommonVoice?                    | Yes          | 80,58% | 66,82% | 180,81 |

Validating the models against the FestCat dataset shows that the models do not generalize well. This corpus has a higher variability in the word count of the test sentences, with 90% of the sentences containing a fairly evenly distributed number of words between 2 and 23, while most of the sentences in the CommonVoice corpus contain between 3 and 16 words.

As expected, validating the models against a test set containing only sentences with 4 or more words improves accuracy:

| Model                                                                 | Corpus                          | Augmentation | WER    | CER    | Loss   |
| --------------------------------------------------------------------- | ------------------------------- | ------------ | ------ | ------ | ------ |
| [email protected]                                                   | CommonVoice                     | No           | 58,78% | 46,61% | 193,85 |
| [email protected]                                                   | CommonVoice                     | Yes          | 58,94% | 46,47% | 188,42 |
| [email protected]                                                   | CommonVoice+ParlamentParlaClean | No           | 56,68% | 46,00% | 189,03 |
| [stashify@deepspeech_cat](https://github.com/stashify/deepspeech_cat) | CommonVoice?                    | Yes          | 61,11% | 48,16% | 144,78 |

## Possible next steps

- Expand the training data with other free datasets
- Tune the model parameters to improve performance
- Validate the models with more varied test datasets (dialects, noise)