How to train the model? #102
Comments
I would like to know the same.
I would also like to know.
Bumping this; I would also like to see the training code and dataset.
For anyone wondering about fine-tuning Shap-E or Point-E: another project, Cap3D, provides fine-tuning code. See cap3D/text-to-3D/finetune_shapE.py at main · crockwell/Cap3D
Does the finetune_shapE.py code train the whole Shap-E model? It seems to me that the code only trains the transformer and diffusion parts, omitting the first two layers of cross-attention and the patch embedding. Am I understanding that correctly? I also see that the data loaded during training is latent_code; how is this latent_code obtained?
That is the point of "fine-tuning": only the weights of the qkv projections are updated.
The latent_code is obtained through the "3D Encoder" described in Fig. 2 of the paper.
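The selective-update scheme described above can be sketched in generic PyTorch: freeze every parameter except those belonging to the qkv projections, then pass only the trainable ones to the optimizer. The `ToyBlock` module and the `qkv_proj` parameter name are assumptions for illustration (ViT-style naming), not the actual Shap-E architecture or the Cap3D script.

```python
# Hedged sketch: fine-tune only the qkv projection weights of a transformer,
# leaving all other parameters frozen. ToyBlock is a stand-in for the real
# Shap-E transformer; 'qkv_proj' is an assumed parameter name.
import torch
import torch.nn as nn


def freeze_all_but_qkv(model: nn.Module) -> list:
    """Enable gradients only for parameters whose name contains 'qkv'.

    Returns the list of trainable parameter names, in declaration order.
    """
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = "qkv" in name
        if param.requires_grad:
            trainable.append(name)
    return trainable


class ToyBlock(nn.Module):
    """Minimal attention-style block standing in for the real architecture."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.qkv_proj = nn.Linear(dim, 3 * dim)  # stays trainable
        self.out_proj = nn.Linear(dim, dim)      # frozen
        self.mlp = nn.Linear(dim, dim)           # frozen


model = ToyBlock()
trainable_names = freeze_all_but_qkv(model)
# Hand only the unfrozen parameters to the optimizer.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
print(trainable_names)  # only qkv_proj.weight and qkv_proj.bias remain trainable
```

This mirrors the usual fine-tuning pattern: the frozen layers still run in the forward pass, but no gradients are stored or applied for them.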
Has anyone found the training code? Where are the training code and the raw dataset?