In theory, semi-continuous or PTM models are supposed to be fast! But training them is incredibly slow, especially the initial flat start. This is most likely due to some redundant or inefficient computation in the training code.
Training a 128-Gaussian PTM model on 100 hours of data on 16 CPUs takes approximately 4 hours, whereas training a 4000-senone, 16-Gaussian continuous model with LDA and MLLT takes only 1 hour 25 minutes (without LDA and MLLT, it's under an hour).
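For reference, the two setups compared above would roughly correspond to something like the following in sphinxtrain's `sphinx_train.cfg`. The variable names are real sphinxtrain options, but this is only an illustrative sketch; the exact values and any options not mentioned in the report are assumptions:

```perl
# PTM setup: phonetically-tied mixtures, 128 Gaussians (slow to train, per this report)
$CFG_HMM_TYPE = '.ptm.';
$CFG_FINAL_NUM_DENSITIES = 128;

# Continuous setup: 4000 tied states, 16 Gaussians each, with LDA/MLLT
# $CFG_HMM_TYPE = '.cont.';
# $CFG_N_TIED_STATES = 4000;
# $CFG_FINAL_NUM_DENSITIES = 16;
# $CFG_LDA_MLLT = 'yes';

# Parallelize training across 16 CPUs
$CFG_NPART = 16;
```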
And of course the accuracy of said PTM model is quite atrocious.
One might argue that they are thoroughly obsolete, but they may in fact be the only remaining reason to use CMU Sphinx, since they produce very small models.