Releases: tumaer/lagrangebench
2024-07-08
Added
- Dataset generation script now in `data_gen/`: (a) `gns_data` moved in there and (b) `lagrangebench_data` was added #30
- Better assertions and comments #31
- Note on missing determinism of GPU operations - see README.md #31
Changed
- Update JAX to latest 0.4.29 #30
- Remove JAX-MD from dependencies. Now neighbor search happens through JAX-SPH #30
- [BREAKING] `cfg.dataset_path` became `cfg.dataset.src` #31
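For configs written as YAML, the breaking rename amounts to nesting the dataset path under a `dataset` key. A minimal sketch of the two layouts; the path value is illustrative, not a real dataset name:

```yaml
# before (cfg.dataset_path): flat key
dataset_path: path/to/dataset

# after (cfg.dataset.src): nested under `dataset`
dataset:
  src: path/to/dataset
```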
Full Changelog: v0.2.0a1...v0.2.0
2024-03-04 - Neural SPH
Added
- Dataset generation notebook #24
- `runner_test.py`, `check_cfg`, `LAGRANGEBENCH_DEFAULTS` #28
- Model checkpoints #29
Fixed
- Rollout extrapolation steps #25
Changed
- Zenodo URL to datasets with `force.py` file #26
- Move `experiments/*` content to `lagrangebench/runner.py` and `lagrangebench/defaults.py` #28
- Configs from `argparse` to `omegaconf` #28
- Trainer from function to `class` #28
Full Changelog: v0.1.2...v0.2.0a1
2024-01-11
Added
- Extended docs with (#17):
  - Reference to notebooks.
  - Baseline results from the NeurIPS 2023 paper.
- README, mainly #22:
  - LagrangeBench logo.
  - Clickable badges with URLs to the paper, RTD, PyPI, Colab, and some git workflows.
  - Contribution guidelines.
  - Notes on macOS and `jax-metal`, see #18.
- Tests, see #21
  - Our tests are written using `unittest`, but we run them with `pytest`. For now we keep that standard.
  - Currently, the tests cover roughly 70% of the codebase, namely:
    - the `case_setup` including preprocessing and integration modules,
    - whether the equivariant models are equivariant,
    - whether all 3 neighbor search backends give correct results on small edge cases,
    - the pushforward utils, and
    - the rollout loop by introducing a dummy 3D Lennard-Jones dataset of 3 particles for 2k steps.
- GitHub Workflows, mainly in #21:
  - Linting checks with `ruff`. Ruff now replaces black.
  - `pytest` under Python 3.9, 3.10, 3.11 including `codecov`.
  - Automatic publishing of tagged versions to PyPI.
- Batched rollout loop using `vmap`. This promises significant speedups, as validation during training used to take around 15%-30% of the time; and of course, batching during inference is nice to have. I noticed that there is an optimal batch size: up to it, larger batches barely change the inference speed, but beyond it there is a regime where we don't get OOM, yet validation becomes significantly slower. Tuning this `batch_size_infer` parameter with a few test runs is my current best advice. See #20 and #21.
- `pkl2vtk` to convert a pickle rollout to a series of .vtk files for visualization.
- Metadata and configs in `pyproject.toml` and other config files, see #21.
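The batching idea above can be sketched with `jax.vmap` mapping a single-trajectory step over a leading batch axis. The `step` function and shapes below are toy stand-ins, not LagrangeBench's actual rollout API:

```python
# Sketch of batching a rollout step with jax.vmap.
# `step` is a hypothetical single-trajectory update, not the real model.
import jax
import jax.numpy as jnp

def step(positions):
    # toy update for one trajectory: (n_particles, dim) -> (n_particles, dim)
    return positions + 0.01 * jnp.sin(positions)

# vmap maps `step` over a new leading batch axis
batched_step = jax.vmap(step)

batch = jnp.zeros((4, 3, 2))  # (batch_size_infer, n_particles, dim)
out = batched_step(batch)
print(out.shape)  # (4, 3, 2)
```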
Fixed
- Multiple neighbor list reallocations during training, see #15.
- When using both random noise and pushforward, the noise seed is now independent of the max number of pushforward steps, see #16.
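The usual way to get this kind of independence in JAX is to split one PRNG key into separate keys per purpose, so consuming randomness for one (pushforward step sampling) cannot shift the other (noise). A minimal sketch with illustrative names, not LagrangeBench's actual code:

```python
# Sketch of decoupling two sources of randomness via jax.random.split.
# Variable names are hypothetical, not taken from the repository.
import jax

key = jax.random.PRNGKey(42)
# one subkey per purpose: the noise key no longer depends on how much
# randomness the pushforward sampling consumes
noise_key, pushforward_key = jax.random.split(key)

noise = jax.random.normal(noise_key, (3, 2))
print(noise.shape)  # (3, 2)
```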
Changed
- Remove explicit force functions from the codebase and put them in `force.py` Python files in the dataset directory of the datasets with forces (2D DAM, 2D RPF, 3D RPF). This comes along with a new version of the datasets on Zenodo here https://doi.org/10.5281/zenodo.10491868, see #23.
- Rename some variables and improve docstrings, see #17.
- Swap the order of `sender` and `receiver` to align with jax-md, see #17.
- Upgrade dependencies and fix `jax==0.4.20`, `jax-md==0.2.8`, and `e3nn-jax==0.20.3`.
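For context on the sender/receiver swap: in the edge-list convention used by JAX graph libraries, `senders` index the message sources and `receivers` index where messages are aggregated, so argument order matters. A toy illustration with a made-up 3-node graph, not code from the repository:

```python
# Toy senders/receivers edge-list convention (sum aggregation).
# The graph and features here are made up for illustration.
import jax.numpy as jnp

senders = jnp.array([0, 1, 2])    # message source nodes
receivers = jnp.array([1, 2, 0])  # message destination nodes

node_feats = jnp.array([1.0, 10.0, 100.0])
messages = node_feats[senders]            # gather from sources
aggregated = jnp.zeros(3).at[receivers].add(messages)  # scatter-add at targets
# node 0 receives 100.0, node 1 receives 1.0, node 2 receives 10.0
print(aggregated)
```

Swapping the two arrays reverses every edge, which is why aligning the order with jax-md is a correctness issue, not a cosmetic one.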
Full Changelog: v0.0.2...v0.1.2
NeurIPS 2023 D&B
Code used to generate the results for the NeurIPS 2023 Datasets & Benchmarks paper.
Extensively tested functionalities (on Ubuntu 22.04 with Python 3.10.12 and Poetry 1.6.0):
- training/inference using config files, as described in the README
- running the 3 notebooks
2023-08-20
First release of `lagrangebench`.