Question about pretraining #7
Your data preprocessing steps could be relevant here as well. How do you preprocess or mix your data?
I am using the default mixing from https://github.com/jzhang38/TinyLlama.
Just wondering if the spiky loss curve will affect the final performance?
If the loss curve isn't stable, it might suggest something went wrong during training; yes, it is likely to indicate a suboptimal outcome.
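Loss-curve instability can be checked mechanically rather than by eye. Below is a minimal sketch (a hypothetical helper, not from this thread) that flags steps whose loss jumps far above the rolling mean of recent steps, assuming per-step losses are logged:

```python
import numpy as np

def flag_loss_spikes(losses, window=100, z_thresh=4.0):
    """Return step indices whose loss sits far above the rolling mean
    of the preceding `window` steps (a crude spike detector)."""
    losses = np.asarray(losses, dtype=float)
    spikes = []
    for i in range(window, len(losses)):
        recent = losses[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and (losses[i] - mu) / sigma > z_thresh:
            spikes.append(i)
    return spikes

# Smoothly decaying curve with one injected spike at step 150.
curve = np.linspace(3.0, 2.0, 200)
curve[150] += 1.5
print(flag_loss_spikes(curve))  # -> [150]
```

A few isolated spikes that recover are usually harmless; repeated or non-recovering spikes are the pattern worth investigating (bad data shards, learning-rate issues, numerical instability).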
Just wondering how the tokenizer affects the final results; the GPT tokenizer has a much larger vocabulary.
What are the hyperparameters? Did you use the ones from TinyLlama, such as the learning rate? I am not very familiar with TinyLlama.
Nope, I am using the ones demonstrated in LLM360.
Also, will changing max_seq_len affect the final results?
I don't think so; this shouldn't matter much unless there is a bug related to it.
I attached my training loss below. The data we are using follows LLM360's paper, except that we use less StarCoder data.
For each training epoch our data contains Arxiv 30B, Book 57B, C4 197.67B, RefinedWeb 665.01B, StarCoder 150B, StackExchange 21.75B, and Wikipedia 23.90B tokens.
The hyperparameters we are using are the same as LLM360 demonstrated, except that max_seq_len is 4096 instead of 2048 and the tokenizer is the GPT tokenizer.
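For reference, the per-epoch token counts listed above imply the following sampling proportions. This is a quick sketch with the numbers copied from the list, assuming one full pass over each source per epoch:

```python
# Per-epoch token counts from the mix described above, in billions.
token_counts_b = {
    "Arxiv": 30.00,
    "Book": 57.00,
    "C4": 197.67,
    "RefinedWeb": 665.01,
    "StarCoder": 150.00,
    "StackExchange": 21.75,
    "Wikipedia": 23.90,
}

total = sum(token_counts_b.values())  # ~1145.33B tokens per epoch
for name, count in token_counts_b.items():
    print(f"{name:14s} {count:8.2f}B  {count / total:6.2%}")
```

RefinedWeb dominates the mix at roughly 58% of tokens, so its quality and preprocessing will have an outsized effect on the final perplexity.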
We are using an open-source repo to run the experiment on H100 nodes with a global batch size of 2048.
Currently our model only achieves around 10.5 PPL on the Falcon dataset, which is much worse than LLM360's Amber model (around 8 PPL) and LLaMA-2 (around 8 PPL).
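To put that gap in perspective: perplexity is the exponential of the mean per-token cross-entropy (in nats), so 10.5 vs. 8 PPL corresponds to roughly a 0.27-nat difference in evaluation loss. A sketch of the conversion:

```python
import math

def ppl_to_loss(ppl):
    """Mean per-token cross-entropy (in nats) corresponding to a perplexity."""
    return math.log(ppl)

ours, reference = 10.5, 8.0
print(f"ours:      {ppl_to_loss(ours):.3f} nats")       # ln(10.5)
print(f"reference: {ppl_to_loss(reference):.3f} nats")  # ln(8.0)
print(f"gap:       {ppl_to_loss(ours) - ppl_to_loss(reference):.3f} nats")
```

Note that PPL numbers are only comparable across models with the same tokenizer; a tokenizer with a larger vocabulary splits text into fewer tokens, which changes per-token perplexity independently of model quality.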
Just wondering what could be the possible reasons that our model performs much worse?