We have an application with a very small vocabulary (~100 words). With an almost trivial bigram model (KenLM does not appear to support building a unigram model), we see that decoder.decode() produces words that are not in the language model.
Is there some kind of fallback to letter decoding? Is there a way to turn this off?
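In case it helps clarify what we're after, here is a minimal post-processing sketch of what we currently do as a workaround: filtering the decoded transcript so only in-vocabulary words survive. The decode call and vocabulary are hypothetical stand-ins, not part of any specific decoder API.

```python
# Hypothetical tiny vocabulary (~100 words in our real application).
VOCAB = {"yes", "no", "stop", "go", "left", "right"}

def filter_oov(transcript: str, vocab=VOCAB) -> str:
    """Drop any word the language model has never seen.

    This is a post-hoc filter; ideally the decoder itself would be
    constrained to the lexicon instead of falling back to letters.
    """
    return " ".join(w for w in transcript.split() if w in vocab)

# e.g. transcript = decoder.decode(logits)  # decoder API as in our setup
print(filter_oov("go lefft stop"))
```

Obviously this loses information (the misrecognized word is dropped rather than corrected), which is why we'd prefer a decoder option that disables the letter-level fallback entirely.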
Thanks!