Commit

merge conflict
jnsLs committed Sep 30, 2024
2 parents 62749d4 + b70361f commit fbc3d31
Showing 70 changed files with 2,083 additions and 549 deletions.
25 changes: 25 additions & 0 deletions .github/workflows/black.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,25 @@
name: Black Code Formatter

on: [push, pull_request]

jobs:
  black:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x' # Specify the Python version you need

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install black==24.4.2
      - name: Run black
        run: black --check .

2 changes: 1 addition & 1 deletion .gitignore
@@ -125,4 +125,4 @@ interfaces/lammps/examples/*/*.dat
interfaces/lammps/examples/*/deployed_model

# batchwise optimizer examples
examples/howtos/howto_batchwise_relaxations_outputs/*
examples/howtos/howto_batchwise_relaxations_outputs/*
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -1,5 +1,5 @@
repos:
  - repo: https://github.com/python/black
    rev: 22.3.0
    rev: 24.4.2
    hooks:
      - id: black
5 changes: 4 additions & 1 deletion docs/api/data.rst
@@ -15,6 +15,7 @@ Atoms data
AtomsLoader
resolve_format
AtomsDataFormat
StratifiedSampler


Creation
@@ -45,4 +46,6 @@ Statistics
:nosignatures:
:template: classtemplate.rst

calculate_stats
calculate_stats
NumberOfAtomsCriterion
PropertyCriterion
13 changes: 13 additions & 0 deletions docs/userguide/configs.rst
@@ -425,3 +425,16 @@ config and code. As an example, you can check out the package
which implements new classes and integrates them in a custom config to build generative
neural networks for molecules.

Internal precision of float32 matrix multiplications
====================================================

When training on GPUs, it can be advantageous to control both the floating-point
precision and the internal precision of matrix multiplications. PyTorch Lightning
allows the floating-point precision to be set via ``trainer.precision`` in the
Hydra config file. In addition, the internal precision used for float32 matrix
multiplications can be adjusted by setting ``matmul_precision`` in the config file,
or by passing the argument ``+matmul_precision`` when using the CLI.
Note that this PyTorch setting only takes effect on GPUs that support adjustable
internal precision for matrix multiplications, such as the NVIDIA A100.
Further details can be found in the PyTorch documentation for
``torch.set_float32_matmul_precision``.
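As a hedged illustration (the key names follow the description above, but the exact
config layout depends on your experiment setup), the two options might be set as:

```yaml
# Sketch of a Hydra config snippet (assumed layout, not a verbatim example):
trainer:
  precision: 32          # explicit floating-point precision (PyTorch Lightning)
matmul_precision: high   # internal float32 matmul precision: highest | high | medium
```

On the command line, the same override would be passed as ``+matmul_precision=high``.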

132 changes: 70 additions & 62 deletions examples/howtos/howto_batchwise_relaxations.ipynb
@@ -1,5 +1,13 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2ebf008e",
"metadata": {},
"source": [
"## <span style=\"color:red\">Batchwise structure optimization is deprecated and will be upgraded soon</span>\n"
]
},
{
"cell_type": "markdown",
"id": "d100d71c",
@@ -19,20 +27,20 @@
{
"cell_type": "code",
"execution_count": null,
"id": "3f71b17f",
"id": "68581ba5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import shutil\n",
"#import os\n",
"#import shutil\n",
"\n",
"import torch\n",
"from ase.io import read\n",
"#import torch\n",
"#from ase.io import read\n",
"\n",
"import schnetpack as spk\n",
"from schnetpack import properties\n",
"from schnetpack.interfaces.ase_interface import AtomsConverter\n",
"from schnetpack.interfaces.batchwise_optimization import ASEBatchwiseLBFGS, BatchwiseCalculator"
"#import schnetpack as spk\n",
"#from schnetpack import properties\n",
"#from schnetpack.interfaces.ase_interface import AtomsConverter\n",
"#from schnetpack.interfaces.batchwise_optimization import ASEBatchwiseLBFGS, BatchwiseCalculator"
]
},
{
@@ -46,36 +54,36 @@
{
"cell_type": "code",
"execution_count": null,
"id": "7be235c2",
"id": "7f1bd733",
"metadata": {},
"outputs": [],
"source": [
"model_path = \"../../tests/testdata/md_ethanol.model\"\n",
"#model_path = \"../../tests/testdata/md_ethanol.model\"\n",
"\n",
"# set device\n",
"device = torch.device(\"cpu\")\n",
"## set device\n",
"#device = torch.device(\"cpu\")\n",
"\n",
"# load model\n",
"model = torch.load(model_path, map_location=device)\n",
"## load model\n",
"#model = torch.load(model_path, map_location=device)\n",
"\n",
"# define neighbor list\n",
"cutoff = model.representation.cutoff.item()\n",
"nbh_list=spk.transform.MatScipyNeighborList(cutoff=cutoff)\n",
"## define neighbor list\n",
"#cutoff = model.representation.cutoff.item()\n",
"#nbh_list=spk.transform.MatScipyNeighborList(cutoff=cutoff)\n",
"\n",
"# build atoms converter\n",
"atoms_converter = AtomsConverter(\n",
" neighbor_list=nbh_list,\n",
" device=device,\n",
")\n",
"## build atoms converter\n",
"#atoms_converter = AtomsConverter(\n",
"# neighbor_list=nbh_list,\n",
"# device=device,\n",
"#)\n",
"\n",
"# build calculator\n",
"calculator = BatchwiseCalculator(\n",
" model=model_path,\n",
" atoms_converter=atoms_converter,\n",
" device=device,\n",
" energy_unit=\"kcal/mol\",\n",
" position_unit=\"Ang\",\n",
")"
"## build calculator\n",
"#calculator = BatchwiseCalculator(\n",
"# model=model_path,\n",
"# atoms_converter=atoms_converter,\n",
"# device=device,\n",
"# energy_unit=\"kcal/mol\",\n",
"# position_unit=\"Ang\",\n",
"#)"
]
},
{
@@ -89,14 +97,14 @@
{
"cell_type": "code",
"execution_count": null,
"id": "2602ada3",
"id": "a3ebc6e9",
"metadata": {},
"outputs": [],
"source": [
"input_structure_file = \"../../tests/testdata/md_ethanol.xyz\"\n",
"#input_structure_file = \"../../tests/testdata/md_ethanol.xyz\"\n",
"\n",
"# load initial structures\n",
"ats = read(input_structure_file, index=\":\")"
"## load initial structures\n",
"#ats = read(input_structure_file, index=\":\")"
]
},
{
@@ -110,15 +118,15 @@
{
"cell_type": "code",
"execution_count": null,
"id": "f2e05da1",
"id": "28e377f4",
"metadata": {},
"outputs": [],
"source": [
"# define structure mask for optimization (True for fixed, False for non-fixed)\n",
"n_atoms = len(ats[0].get_atomic_numbers())\n",
"single_structure_mask = [False for _ in range(n_atoms)]\n",
"# expand mask by number of input structures (fixed atoms are equivalent for all input structures)\n",
"mask = single_structure_mask * len(ats)"
"## define structure mask for optimization (True for fixed, False for non-fixed)\n",
"#n_atoms = len(ats[0].get_atomic_numbers())\n",
"#single_structure_mask = [False for _ in range(n_atoms)]\n",
"## expand mask by number of input structures (fixed atoms are equivalent for all input structures)\n",
"#mask = single_structure_mask * len(ats)"
]
},
{
@@ -132,34 +140,34 @@
{
"cell_type": "code",
"execution_count": null,
"id": "e80e2433",
"id": "2532bb4a",
"metadata": {},
"outputs": [],
"source": [
"results_dir = \"./howto_batchwise_relaxations_outputs\"\n",
"if not os.path.exists(results_dir):\n",
" os.makedirs(results_dir)\n",
"#results_dir = \"./howto_batchwise_relaxations_outputs\"\n",
"#if not os.path.exists(results_dir):\n",
"# os.makedirs(results_dir)\n",
"\n",
"# Initialize optimizer\n",
"optimizer = ASEBatchwiseLBFGS(\n",
" calculator=calculator,\n",
" atoms=ats,\n",
" trajectory=\"./howto_batchwise_relaxations_outputs/relax_traj\",\n",
")\n",
"## Initialize optimizer\n",
"#optimizer = ASEBatchwiseLBFGS(\n",
"# calculator=calculator,\n",
"# atoms=ats,\n",
"# trajectory=\"./howto_batchwise_relaxations_outputs/relax_traj\",\n",
"#)\n",
"\n",
"# run optimization\n",
"optimizer.run(fmax=0.0005, steps=1000)"
"## run optimization\n",
"#optimizer.run(fmax=0.0005, steps=1000)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce9e2ad3",
"id": "fb369782",
"metadata": {},
"outputs": [],
"source": [
"if os.path.exists(results_dir):\n",
" shutil.rmtree(results_dir)"
"#if os.path.exists(results_dir):\n",
"# shutil.rmtree(results_dir)"
]
},
{
@@ -173,17 +181,17 @@
{
"cell_type": "code",
"execution_count": null,
"id": "1ed93e7d",
"id": "78f81235",
"metadata": {},
"outputs": [],
"source": [
"# get list of optimized structures and properties\n",
"opt_atoms, opt_props = optimizer.get_relaxation_results()\n",
"## get list of optimized structures and properties\n",
"#opt_atoms, opt_props = optimizer.get_relaxation_results()\n",
"\n",
"for oatoms in opt_atoms:\n",
" print(oatoms.get_positions())\n",
"#for oatoms in opt_atoms:\n",
"# print(oatoms.get_positions())\n",
" \n",
"print(opt_props)"
"#print(opt_props)"
]
},
{
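Aside: the (now commented-out) mask cell in the notebook diff above relies on a small
pure-Python idiom — a per-structure fixed-atom mask is replicated across the batch by
list multiplication, since the fixed atoms are the same for every input structure. A
minimal self-contained sketch of that idiom (the atom and structure counts here are
illustrative, not taken from the notebook):

```python
# Replicate a per-structure fixed-atom mask across a batch of identical structures.
n_atoms = 9        # illustrative per-structure atom count (e.g. ethanol has 9 atoms)
n_structures = 4   # hypothetical number of structures in the batch

# True = atom is fixed, False = atom is free to move (as in the notebook cell)
single_structure_mask = [False] * n_atoms

# One flat mask covering the whole batch, structure by structure.
mask = single_structure_mask * n_structures

assert len(mask) == n_atoms * n_structures
```

This works because Python list multiplication concatenates copies of the list, which
matches a batch layout where the atoms of all structures are stacked in order.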
3 changes: 2 additions & 1 deletion examples/tutorials/tutorial_01_preparing_data.ipynb
@@ -48,6 +48,7 @@
" batch_size=10,\n",
" num_train=110000,\n",
" num_val=10000,\n",
" split_file='./split_qm9.npz', \n",
" transforms=[ASENeighborList(cutoff=5.)]\n",
")\n",
"qm9data.prepare_data()\n",
@@ -447,7 +448,7 @@
{
"cell_type": "markdown",
"source": [
"## Using you data for training\n",
"## Using your data for training\n",
"We have now used the class `ASEAtomsData` to create a new `ase` database for our custom data. `schnetpack.data.ASEAtomsData` is a subclass of `pytorch.data.Dataset` and could be utilized for training models with `pytorch`. However, we use `pytorch-lightning` to conveniently handle the training procedure for us. This requires us to wrap the dataset in a [LightningDataModule](https://lightning.ai/docs/pytorch/stable/data/datamodule.html). We provide a general purpose `AtomsDataModule` for atomic systems in `schnetpack.data.datamodule.AtomsDataModule`. The data module will handle the unit conversion, splitting, batching and the preprocessing of the data with `transforms`. We can instantiate the data module for our custom dataset with:"
],
"metadata": {