
Implement NNPOps Optimised ANI #21

Closed · wants to merge 7 commits

Conversation

not-matt commented Feb 3, 2022

These changes simplify the use of NNPOps' OptimizedTorchANI in OpenMM simulations, giving ANI2x a ~6x speedup on my machine. They also re-enable periodic boundary conditions on CUDA (see aiqm/torchani#607 (comment)).

  • Faster ANI for small, unsolvated systems
  • Allows large solvated systems to be run using mixed ANI/AMBER

Thanks to @jchodera for his example code openmm/openmm-torch#59 (comment)
Addresses #14

Demonstration
https://github.com/meyresearch/ANI-Peptides/blob/main/demos/ANI_minimal.ipynb
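A minimal usage sketch in the spirit of the notebook (the input file name is a placeholder, and this assumes the PR branch plus NNPOps are installed):

from openmm import app, unit, LangevinMiddleIntegrator
from openmmml import MLPotential

pdb = app.PDBFile('peptide.pdb')  # placeholder input structure
potential = MLPotential('ani2x')  # NNPOps-accelerated with these changes
system = potential.createSystem(pdb.topology)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin, 1 / unit.picosecond,
                                      0.001 * unit.picoseconds)
simulation = app.Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.step(100)

For the mixed ANI/AMBER case, potential.createMixedSystem(pdb.topology, mm_system, ml_atoms) embeds the ML region in a conventional force field system.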

It's important to note that this PR lacks an option to choose between classic TorchANI and the NNPOps OptimizedTorchANI, instead enforcing NNPOps as the only method. I'd be happy to implement a way to choose between the two, but I'm not sure how this would best fit the library's structure.

e.g.

openmmml.MLPotential('ani2x_optimised')

or perhaps

openmmml.MLPotential('ani2x', optimised=True)
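Either way, the underlying change follows @jchodera's example: wrap a TorchANI model with NNPOps' OptimizedTorchANI. A minimal sketch, assuming TorchANI and NNPOps are installed (the species tensor is a toy example):

import torch
import torchani
from NNPOps import OptimizedTorchANI

device = torch.device('cuda')

# Atomic numbers of the ML atoms, shape (1, n_atoms); water as a toy example
species = torch.tensor([[8, 1, 1]], device=device)

model = torchani.models.ANI2x(periodic_table_index=True)
model = OptimizedTorchANI(model, species).to(device)  # swap in NNPOps kernels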

raimis commented Feb 3, 2022

Note that #20 tries to do the same.

raimis commented Feb 3, 2022

@not-matt, regarding the performance, could you run the benchmarks and compare with #20 (comment)?

not-matt (Author) commented Feb 3, 2022

With the benchmark slightly modified to fix a couple of bugs (#20 (comment)), I got 0.0031 s/step using NNPOps. This is in line with the results from that PR - the implementations are pretty much identical under the hood.
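For anyone reproducing this, the timing boils down to something like the following sketch, assuming an existing Simulation object named simulation (the actual benchmark script is in #20):

import time

simulation.step(10)  # warm-up, so TorchScript compilation is excluded
start = time.time()
simulation.step(100)
print(f'{(time.time() - start) / 100:.4f} s/step')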

raimis commented Feb 4, 2022

Regarding the optimization keyword, we have 3 cases:

  • Pure PyTorch implementation of ANI-2x
    openmmml.MLPotential('ani2x', optimise=None)
  • ANI-2x with the TorchANI CUDA AEV extension
    openmmml.MLPotential('ani2x', optimise='cuaev')
  • ANI-2x accelerated with NNPOps (by default)
    openmmml.MLPotential('ani2x', optimise='nnpops')

I think this will be the simplest API for a user.

Ping: @jchodera @peastman @yueyericardo @dominicrufa

peastman (Member) commented Feb 4, 2022

How about calling it implementation, since that's really what you're selecting: which implementation to use. I'd also suggest that if you want the default TorchANI implementation, the value should be 'torchani' rather than None. That will be clearer.

raimis commented Feb 8, 2022

If there are no more comments, the API should look like this:

openmmml.MLPotential('ani2x', implementation='torchani')
openmmml.MLPotential('ani2x', implementation='cuaev')
openmmml.MLPotential('ani2x', implementation='nnpops') # default

@not-matt could you implement this?
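For illustration, a hypothetical sketch of how the dispatch over this keyword could look inside the ANI potential implementation (the helper name and structure are illustrative, not the final code):

def _apply_implementation(model, species, implementation, device):
    """Illustrative dispatch over the proposed 'implementation' keyword."""
    if implementation == 'nnpops':
        from NNPOps import OptimizedTorchANI
        return OptimizedTorchANI(model, species).to(device)
    elif implementation == 'cuaev':
        model.aev_computer.use_cuda_extension = True  # TorchANI CUDA AEV kernels
        return model.to(device)
    elif implementation == 'torchani':
        return model.to(device)  # pure PyTorch reference path
    raise ValueError(f'Unsupported implementation: {implementation}')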

not-matt (Author) commented Feb 9, 2022

Yep, can do.

@raimis cuaev is enabled like so?

self.model.aev_computer.use_cuda_extension = True

raimis commented Feb 9, 2022

> @raimis cuaev is enabled like so?
>
> self.model.aev_computer.use_cuda_extension = True

@yueyericardo could you confirm?

yueyericardo commented

confirmed!
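For completeness, a minimal sketch of the cuaev path on a standalone TorchANI model, assuming torchani was built with the cuaev extension:

import torch
import torchani

device = torch.device('cuda')
model = torchani.models.ANI2x(periodic_table_index=True).to(device)
model.aev_computer.use_cuda_extension = True  # route AEVs through the CUDA kernels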

raimis commented Feb 17, 2022

@not-matt any progress?

raimis mentioned this pull request on Feb 17, 2022
not-matt (Author) commented
@raimis Yep, just about there. I'm testing it thoroughly with each optimisation option.

Review comment on this hunk:

        device = torch.device('cuda')
        model = OptimizedTorchANI(model, species).to(device)
    elif implementation == "cuaev":
        self.model.aev_computer.use_cuda_extension = True

This doesn't work.

Review comment on this hunk:

@@ -82,14 +83,26 @@ def addForces(self,
        if atoms is not None:
            includedAtoms = [includedAtoms[i] for i in atoms]
        atomic_numbers = [atom.element.atomic_number for atom in includedAtoms]
        species = torch.tensor(atomic_numbers).unsqueeze(0)

atomic_numbers is a more meaningful name for this variable.

raimis commented Feb 21, 2022

Also, it would be good to have a test to make sure things don't get broken.
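A hypothetical sketch of such a test, using the API proposed above (the fixture file name and tolerance are illustrative):

import openmm
from openmm import app, unit
from openmmml import MLPotential

def test_ani2x_implementations_agree():
    pdb = app.PDBFile('alanine-dipeptide.pdb')  # hypothetical test fixture
    energies = {}
    for implementation in ('torchani', 'cuaev', 'nnpops'):
        potential = MLPotential('ani2x', implementation=implementation)
        system = potential.createSystem(pdb.topology)
        context = openmm.Context(system, openmm.VerletIntegrator(0.001))
        context.setPositions(pdb.positions)
        energy = context.getState(getEnergy=True).getPotentialEnergy()
        energies[implementation] = energy.value_in_unit(unit.kilojoules_per_mole)
    reference = energies['torchani']
    for name, energy in energies.items():
        assert abs(energy - reference) < 1e-3, f'{name} disagrees with torchani'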

peastman (Member) commented

Is this ready to merge?

raimis commented Mar 29, 2022

I think we still need some tests to make sure things keep working.

peastman (Member) commented Jun 9, 2022

What else has to be done before we can merge this? It would be really nice if we could get it in before building the OpenMM 8.0 beta, so we can get beta users to try out the full stack including optimized kernels.

peastman (Member) commented

This is superseded by #35.

peastman closed this on Jul 14, 2022