Hello, I was trying to train my own LLM on the EnCodec tokenizer and I wanted a bit of help. The LLM does not seem to learn the tokens, while a drop-in replacement with the SEED tokenizer works fine. The shape of the codes is [4, 250]. How do I format this as a sequence so it has a causal dependency? Currently I have been doing it the way the paper describes: 4 codes per frame, frame by frame. Is there something else I should look out for? In the outputs, the model just repeats the same 4 codes over and over, and the generated audio is pure silence.
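For concreteness, here is a minimal sketch of the interleaving I described. The codebook size of 1024 and the per-codebook ID offset are illustrative assumptions on my part (without offsets, the 4 streams share one vocabulary, which I suspect may be part of my problem):

```python
import torch

# EnCodec codes for one clip: [n_codebooks, n_frames] = [4, 250]
codes = torch.randint(0, 1024, (4, 250))  # placeholder for real codes

n_q, T = codes.shape
codebook_size = 1024  # assumed; depends on the EnCodec config

# Give each codebook its own ID range so the LM can tell the streams apart
offsets = torch.arange(n_q).unsqueeze(1) * codebook_size  # [4, 1]
offset_codes = codes + offsets                            # [4, 250]

# Flatten frame by frame: frame 0's 4 codes, then frame 1's 4 codes, ...
seq = offset_codes.transpose(0, 1).reshape(-1)            # [4 * 250] = [1000]
```

At decode time I undo this with `seq.reshape(T, n_q).transpose(0, 1) - offsets` before feeding the codes back to EnCodec. If this formatting is wrong, please let me know.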