Supporting any Torch model (not just TorchScript) #131

Open

RaulPPelaez opened this issue Jan 30, 2024 · 3 comments

@RaulPPelaez (Contributor)

It would be great to be able to use a torch.compile'd model in OpenMM-Torch.

AFAIK there is no way to cross the Python-C++ barrier with a torch.compile'd model; there is nothing analogous to torch::jit::Module for TorchScript.

I can think of two solutions:

  1. Let TorchForce accept any Python class as its module (as long as forward has the right Tensor inputs/outputs).
    Doing this requires sending a generic Python class/function to C++ through SWIG, which I have not managed to do. It is really easy with pybind11, but I cannot manage to mix pybind11 and SWIG for the life of me.
  2. Take a TorchScript model like we do now, but internally call torch.compile.
    AFAIK there is no way to call torch.compile from C++, so we would have to invoke it via pybind11 in, perhaps, the TorchForce constructor. A py::object would then be stored instead of a torch::jit::Module (see the sketch below).
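
For concreteness, here is a rough Python-side sketch of what the two options could look like. The toy model is purely illustrative, a TorchForce constructor accepting an arbitrary (non-scripted) module or callable in option 1 is hypothetical, and whether torch.compile handles a loaded/scripted module usefully in option 2 is untested; option 2 assumes a TorchForce build that already accepts a ScriptModule directly.

    import torch
    from openmmtorch import TorchForce

    class EnergyModel(torch.nn.Module):
        """Toy model with the Tensor-in/Tensor-out forward that TorchForce expects."""
        def forward(self, positions: torch.Tensor) -> torch.Tensor:
            return torch.sum(positions ** 2)  # scalar potential energy

    # Option 1 (hypothetical API): TorchForce accepts any Python module/callable,
    # including the result of torch.compile.
    # force = TorchForce(torch.compile(EnergyModel()))

    # Option 2: keep the TorchScript input exactly as today, but have the
    # constructor call torch.compile on it through pybind11 and hold the
    # resulting callable as a py::object instead of a torch::jit::Module.
    scripted = torch.jit.script(EnergyModel())
    force = TorchForce(scripted)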

I think option 2 is the simplest given the current state of the codebase. Allowing a model that is not easily serializable (i.e. not TorchScript) would make serializing TorchForce an issue.

I would like to hear your thoughts on this!

@sef43 commented Jan 30, 2024

Here is a relevant PyTorch forum thread. At some point torch.export may become an option:
https://dev-discuss.pytorch.org/t/the-future-of-c-model-deployment/1282

Edit: maybe it is already possible; this looks promising: https://pytorch.org/docs/main/torch.compiler_aot_inductor.html
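
For reference, the AOT Inductor flow that page describes looks roughly like the following on the Python side (torch._export.aot_compile is an experimental/private API as of PyTorch 2.x, so the exact spelling may change). The produced shared library is then meant to be loaded from C++ with something like torch::inductor::AOTIModelContainerRunnerCpu, without needing a Python runtime at simulation time.

    import os
    import torch

    class EnergyModel(torch.nn.Module):
        """Toy energy model, illustrative only."""
        def forward(self, positions: torch.Tensor) -> torch.Tensor:
            return torch.sum(positions ** 2)

    model = EnergyModel().eval()
    example_positions = (torch.randn(100, 3),)

    # Ahead-of-time compile the model into a self-contained shared library.
    with torch.no_grad():
        so_path = torch._export.aot_compile(
            model,
            example_positions,
            options={"aot_inductor.output_path": os.path.join(os.getcwd(), "energy_model.so")},
        )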

@peastman (Member)

Any solution that requires a Python runtime to be present will be very limiting. Think of Folding@home, for example.

It sounds like this is all still in flux, and there are important things torch.compile can't do yet. Hopefully once everything settles down, there will be a clear migration path. I really hope they continue to support jit compilation, though. Having to rely on ahead-of-time compiled libraries would also be very limiting, and likely infeasible for important use cases.

@RaulPPelaez (Contributor, Author)

Steve's AOT thingy seems to be the only PyTorch-endorsed way, but I have zero faith it is actually usable as of today -.-
I agree, Peter, things are still super experimental.
It is a shame, though, to be able to run a torch.compile'd model and an OpenMM simulation in the same Python script but not mix the two. From a user's perspective it is a bit frustrating.
