Commit example: giganticode/bohr@3dfd353
Even if the dependencies of the label model training stage don't change, rerunning the training results in a label model with a different hash. The root cause is that when the model is serialized, some of its attributes (PyTorch tensors) are serialized into a different sequence of bytes every time; I'm still not sure why. I asked about this here:
https://discuss.pytorch.org/t/tensor-pickling-inconsistent-between-runs/122533
https://stackoverflow.com/questions/67710297/pytorch-tensor-pickling-inconsistent-between-runs
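One possible workaround, independent of why pickling is non-deterministic, is to hash the model's numeric contents directly instead of the pickle byte stream. Below is a minimal sketch: the `content_hash` helper is hypothetical (not part of bohr or Snorkel), and it uses NumPy arrays as a stand-in for the model's PyTorch tensors (for a real tensor you would first call `.detach().cpu().numpy()`).

```python
import hashlib
import numpy as np

def content_hash(arrays):
    """Deterministic hash over array contents only (dtype, shape, raw bytes),
    bypassing pickle's unstable byte-level representation of the object."""
    h = hashlib.sha256()
    for a in arrays:
        a = np.ascontiguousarray(a)          # ensure a stable memory layout
        h.update(str(a.dtype).encode())      # distinguish float32 vs float64, etc.
        h.update(str(a.shape).encode())      # distinguish (2, 3) vs (3, 2)
        h.update(a.tobytes())                # raw buffer is deterministic
    return h.hexdigest()

# Arrays with identical contents hash identically,
# no matter how they were constructed.
a = np.arange(6, dtype=np.float32).reshape(2, 3)
b = np.array([[0, 1, 2], [3, 4, 5]], dtype=np.float32)
assert content_hash([a]) == content_hash([b])
```

If DVC (or whatever tool computes the stage hash) were pointed at such a content digest rather than at the pickled model file, an unchanged training run would produce an unchanged hash.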