I notice that doing inference with a language model on a large amount of text can be quite slow. In particular, it took me 11 minutes to decode around 4600 lines of text. Does pyctcdecode support GPU acceleration? At the moment, the decoder appears to run on CPU only, which makes batch decoding and similar workloads impractical.
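For reference, here is roughly what my decoding setup looks like. This is a minimal sketch: the alphabet, the `lm.binary` path, and the random logits are placeholders standing in for my actual model outputs.

```python
import multiprocessing

import numpy as np
from pyctcdecode import build_ctcdecoder

# vocabulary must match the acoustic model's output layer (placeholder alphabet here)
labels = ["", " "] + list("abcdefghijklmnopqrstuvwxyz")

# "lm.binary" is a placeholder path for the KenLM language model
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.binary")

# stand-in for real acoustic-model output: one (time, vocab) log-prob matrix per line
rng = np.random.default_rng(0)
logits_list = [
    np.log(rng.dirichlet(np.ones(len(labels)), size=50).astype(np.float32))
    for _ in range(8)
]

# serial decoding, effectively what I am doing now:
# texts = [decoder.decode(logits) for logits in logits_list]

# decode_batch spreads the beam search across a process pool; as far as I can
# tell this is the only parallelism available, since the search itself is CPU-bound
with multiprocessing.get_context("fork").Pool() as pool:
    texts = decoder.decode_batch(pool, logits_list)
```

Even with `decode_batch` and a process pool, throughput seems limited by the CPU-bound beam search, which is why I'm asking about GPU support.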