Floating Point Inconsistency Error #99
RongkunZhou asked this question in Q&A (unanswered, 0 replies)
Hello, I'm trying to replicate the experiment from the Colab notebook on my own server, but I am encountering a floating-point inconsistency error. I set `default_dtype` in `tutorial.yaml` to `float64` (see the precision-check sketch after the environment details below). Here are the details:
Environment:
NequIP: 0.5.5
LAMMPS: 27 Jun 2024
PyTorch: 1.11.0+cu113
CUDA: 11.3
Python: 3.8.10
libtorch: deps-1.11.0+cu113
GPU: RTX3080Ti
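For reference, a minimal sketch of how the deployed model's precision could be checked (the file name `deployed_model.pth` is only a placeholder for whatever `nequip-deploy` produced):

```python
import torch

# Load the deployed TorchScript model (placeholder file name) and
# inspect which floating-point precisions its parameters use.
model = torch.jit.load("deployed_model.pth", map_location="cpu")
param_dtypes = {p.dtype for p in model.parameters()}
print(param_dtypes)  # expected: {torch.float64} if default_dtype: float64 took effect
```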
Error Message:
The error indicates a data-type inconsistency (Float vs. Double) during an operation inside the TorchScript interpreter. Is this issue related to the versions of the libraries I am using? How can I resolve the floating-point inconsistency?
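For context, here is a minimal, generic PyTorch sketch (not NequIP-specific) of the kind of Float vs. Double mismatch the TorchScript interpreter reports, and how keeping model and data in the same precision avoids it:

```python
import torch

# nn.Linear weights default to float32; the input below is float64 (double),
# so the scripted module hits a Float vs. Double mismatch at call time.
layer = torch.nn.Linear(3, 3)
scripted = torch.jit.script(layer)

x = torch.rand(1, 3, dtype=torch.float64)
try:
    scripted(x)
except RuntimeError as err:
    print(err)  # e.g. "expected scalar type Double but found Float"

# Casting the module to double before scripting keeps model and data consistent.
scripted_double = torch.jit.script(layer.double())
print(scripted_double(x).dtype)  # torch.float64
```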
Thank you very much!