Merge remote-tracking branch 'upstream/development' into doc_updates
roelof-groenewald committed Sep 19, 2024
2 parents 367f97e + 180245e commit 76dc7ea
Showing 162 changed files with 4,309 additions and 3,527 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/cuda.yml
@@ -131,7 +131,7 @@ jobs:
which nvcc || echo "nvcc not in PATH!"
git clone https://github.com/AMReX-Codes/amrex.git ../amrex
- cd ../amrex && git checkout --detach 216ce6f37de4b65be57fc1006b3457b4fc318e03 && cd -
+ cd ../amrex && git checkout --detach 028638564f7be0694b9898f8d4088cdbf9a6f9f5 && cd -
make COMP=gcc QED=FALSE USE_MPI=TRUE USE_GPU=TRUE USE_OMP=FALSE USE_FFT=TRUE USE_CCACHE=TRUE -j 4
ccache -s
4 changes: 2 additions & 2 deletions .github/workflows/insitu.yml
@@ -101,8 +101,8 @@ jobs:
cmake --build build -j 10
- name: 2D Test
run: |
- cp Examples/Tests/ionization/inputs_test_2d_ionization_lab .
- cp Examples/Tests/ionization/catalyst_pipeline.py .
+ cp Examples/Tests/field_ionization/inputs_test_2d_ionization_lab .
+ cp Examples/Tests/field_ionization/catalyst_pipeline.py .
mpiexec -n 2 ./build/bin/warpx.2d \
inputs_test_2d_ionization_lab \
catalyst.script_paths = catalyst_pipeline.py\
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -69,7 +69,7 @@ repos:
# Python: Ruff linter & formatter
# https://docs.astral.sh/ruff/
- repo: https://github.com/astral-sh/ruff-pre-commit
-   rev: v0.6.4
+   rev: v0.6.5
hooks:
# Run the linter
- id: ruff
136 changes: 52 additions & 84 deletions Docs/source/dataanalysis/catalyst.rst
@@ -9,28 +9,25 @@ visualization and analysis capabilities, which is what this document will focus

Enabling Catalyst
-----------------
- In order to use Catalyst with WarpX, you must `build Catalyst 2 <https://catalyst-in-situ.readthedocs.io/en/latest/build_and_install.html>`_ and `build <https://github.com/Kitware/ParaView/blob/master/Documentation/dev/build.md>`__ or `install <https://www.paraview.org/download/>`__ ParaView 5.9+. Afterward, AMReX must be built with ``AMReX_CONDUIT=TRUE``,
- ``AMReX_CATALYST=TRUE``, ``Conduit_DIR=/path/to/conduit``, and ``Catalyst_DIR=/path/to/catalyst`` (``/path/to/catalyst`` should be the directory containing ``catalyst-config.cmake``, not the path to the implementation).
-
- Once AMReX is appropriately built, WarpX can be built with the following options:
-
- .. code-block:: cmake
-
-    WarpX_amrex_internal=FALSE
-    AMReX_DIR="/path/to/amrex/build"
-
- If they cannot be found, ``Conduit_DIR`` and ``Catalyst_DIR`` will have to be set again. Ensure that AMReX is built with all required options, some common ones being:
-
- .. code-block:: cmake
-
-    AMReX_MPI=TRUE
-    AMReX_MPI_THREAD_MULTIPLE=TRUE
-    AMReX_LINEAR_SOLVERS=TRUE
-    AMReX_PARTICLES=TRUE
-    AMReX_PARTICLES_PRECISION=DOUBLE
-    AMReX_PIC=TRUE
-    AMReX_TINY_PROFILE=TRUE
+ In order to use Catalyst with WarpX, we need to ensure that the same version of
+ Conduit is used across all libraries, i.e., Catalyst, AMReX, and ParaView. One way to achieve this is to
+ build Conduit externally and use it when compiling all of the above packages.
+ This ensures compatibility when passing Conduit nodes between WarpX and ParaView.
+
+ First, we build
+ `Conduit <https://llnl-conduit.readthedocs.io/en/latest/building.html>`_ and then
+ build `Catalyst 2 <https://catalyst-in-situ.readthedocs.io/en/latest/build_and_install.html>`_
+ using the Conduit library created in the previous step.
+ The latter can be achieved by adding the installation path of Conduit to the environment
+ variable ``CMAKE_PREFIX_PATH`` and setting ``CATALYST_WITH_EXTERNAL_CONDUIT=ON`` during the configuration step of Catalyst.
+
+ Then we build ParaView master (on a commit after 2024.07.01, tested on ``4ef351a54ff747ef7169e2e52e77d9703a9dfa77``) following the developer instructions provided
+ `here <https://github.com/Kitware/ParaView/blob/master/Documentation/dev/build.md>`__.
+ A representative set of options for a headless ParaView installation is provided
+ `here <https://gitlab.kitware.com/christos.tsolakis/catalyst-amrex-docker-images/-/blob/ci-catalyst-amrex-warpx-20240701/docker/ubuntu22_04/install_paraview.sh>`__.
+ Afterward, WarpX must be built with ``WarpX_CATALYST=ON``.
+ Also, make sure to provide the installed paths of Conduit and Catalyst via
+ ``CMAKE_PREFIX_PATH`` before configuring WarpX.
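
A minimal sketch of this overall sequence follows; the install prefixes, source locations, and job counts are illustrative assumptions, not tested commands, and the ParaView build is elided:

.. code-block:: bash

   # build and install Conduit to a common prefix (prefix is an assumption)
   cmake -S conduit/src -B conduit-build -DCMAKE_INSTALL_PREFIX=$HOME/sw/conduit
   cmake --build conduit-build --target install -j 8

   # build Catalyst 2 against that external Conduit
   export CMAKE_PREFIX_PATH=$HOME/sw/conduit:$CMAKE_PREFIX_PATH
   cmake -S catalyst -B catalyst-build -DCMAKE_INSTALL_PREFIX=$HOME/sw/catalyst \
         -DCATALYST_WITH_EXTERNAL_CONDUIT=ON
   cmake --build catalyst-build --target install -j 8

   # ... build ParaView with Catalyst support as described in the links above ...

   # configure and build WarpX with Catalyst enabled
   export CMAKE_PREFIX_PATH=$HOME/sw/catalyst:$CMAKE_PREFIX_PATH
   cmake -S WarpX -B WarpX-build -DWarpX_CATALYST=ON
   cmake --build WarpX-build -j 8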

Inputs File Configuration
-------------------------
@@ -43,6 +40,10 @@ In addition to configuring the diagnostics, the following parameters must be inc
* ``catalyst.implementation`` (default ``paraview``): The name of the implementation being used (case sensitive).
* ``catalyst.implementation_search_paths``: The locations to search for the given implementation. The specific file being searched for will be ``catalyst_{implementation}.so``.

+ The latter two can also be given via the environment variables
+ ``CATALYST_IMPLEMENTATION_NAME`` and ``CATALYST_IMPLEMENTATION_PATHS``,
+ respectively.
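
For example, a WarpX inputs file might set these parameters as follows (the pipeline script name and the search path are placeholders to adapt to your installation):

.. code-block:: ini

   catalyst.script_paths = catalyst_pipeline.py
   catalyst.implementation = paraview
   catalyst.implementation_search_paths = /path/to/paraview-install/lib/catalyst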

Because the scripts and implementations are global, Catalyst does not benefit from nor differentiate between multiple diagnostics.


@@ -53,66 +54,10 @@ Catalyst uses the files specified in ``catalyst.script_paths`` to run all analys
The following script, :code:`simple_catalyst_pipeline.py`, automatically detects the type of data for both the mesh and particles, then creates an extractor for them. In most
cases, these will be saved as ``.VTPC`` files which can be read with the ``XML Partitioned Dataset Collection Reader``.

- .. code-block:: python
-
-    from paraview.simple import *
-    from paraview import catalyst
-
-    # Helper function
-    def create_extractor(data_node, filename="Dataset"):
-        VTK_TYPES = ["vtkImageData", "vtkRectilinearGrid", "vtkStructuredGrid", "vtkPolyData", "vtkUnstructuredGrid", "vtkUniformGridAMR", "vtkMultiBlockDataSet", "vtkPartitionedDataSet", "vtkPartitionedDataSetCollection", "vtkHyperTreeGrid"]
-        FILE_ASSOCIATIONS = ["VTI", "VTR", "VTS", "VTP", "VTU", "VTH", "VTM", "VTPD", "VTPC", "HTG"]
-        clientside_data = data_node.GetClientSideObject().GetOutputDataObject(0)  # Gets the dataobject from the default output port
-
-        # Loop is required because .IsA() detects valid classes that inherit from the VTK_TYPES
-        for i, vtk_type in enumerate(VTK_TYPES):
-            if (clientside_data.IsA(vtk_type)):
-                filetype = FILE_ASSOCIATIONS[i]
-                extractor = CreateExtractor(filetype, data_node, registrationName=f"_{filetype}")
-                extractor.Writer.FileName = filename + "_{timestep:}" + f".{filetype}"
-                return extractor
-
-        raise RuntimeError(f"Unsupported data type: {clientside_data.GetClassName()}")
+ .. literalinclude:: catalyst/catalyst_simple_pipeline.py
+    :language: python
+    :caption: You can copy this file from ``Docs/source/dataanalysis/catalyst/catalyst_simple_pipeline.py``.

-    # Camera settings
-    paraview.simple._DisableFirstRenderCameraReset()  # Prevents the camera from being shown
-
-    # Options
-    options = catalyst.Options()
-    options.CatalystLiveTrigger = "TimeStep"  # "Python", "TimeStep", "TimeValue"
-    options.EnableCatalystLive = 0  # 0 (disabled), 1 (enabled)
-    if (options.EnableCatalystLive == 1):
-        options.CatalystLiveURL = "localhost:22222"  # localhost:22222 is default
-    options.ExtractsOutputDirectory = "datasets"  # Base for where all files are saved
-    options.GenerateCinemaSpecification = 0  # 0 (disabled), 1 (enabled), generates additional descriptor files for cinema exports
-    options.GlobalTrigger = "TimeStep"  # "Python", "TimeStep", "TimeValue"
-
-    meshSource = PVTrivialProducer(registrationName="mesh")  # "mesh" is the node where the mesh data is stored
-    create_extractor(meshSource, filename="meshdata")
-    particleSource = PVTrivialProducer(registrationName="particles")  # "particles" is the node where particle data is stored
-    create_extractor(particleSource, filename="particledata")
-
-    # Called on catalyst initialize (after Cxx side initialize)
-    def catalyst_initialize():
-        return
-
-    # Called on catalyst execute (after Cxx side update)
-    def catalyst_execute(info):
-        print(f"Time: {info.time}, Timestep: {info.timestep}, Cycle: {info.cycle}")
-        return
-
-    # Callback if global trigger is set to "Python"
-    def is_activated(controller):
-        return True
-
-    # Called on catalyst finalize (after Cxx side finalize)
-    def catalyst_finalize():
-        return
-
-    if __name__ == '__main__':
-        paraview.simple.SaveExtractsUsingCatalystOptions(options)

For the case of ParaView Catalyst, pipelines are run with ParaView's included ``pvbatch`` executable and use the ``paraview`` library to modify the data. While pipeline scripts
@@ -159,9 +104,32 @@ Steps one is advised so that proper scaling and framing can be done, however in

Replay
------
- Catalyst 2 supports replay capabilities, which can be read about `here <https://catalyst-in-situ.readthedocs.io/en/latest/catalyst_replay.html>`_.
-
- .. note::
-
-    * TODO: Add more extensive documentation on replay
+ Catalyst 2.0 supports generating binary data dumps for the conduit nodes passed to each ``catalyst_`` call at each iteration. This makes it possible to debug and adapt Catalyst scripts without having to rerun the simulation each time.
+
+ To generate the data dumps, one must first set the environment variable ``CATALYST_DATA_DUMP_DIRECTORY`` to the path where the dumps should be saved. Then, run the simulation as normal but set ``catalyst.implementation=stub``, either in the WarpX inputs file or as an additional command-line argument.
+
+ This will run the simulation and write the conduit nodes under ``CATALYST_DATA_DUMP_DIRECTORY``.
+
+ Afterward, one can replay the generated nodes by setting up the ``CATALYST_IMPLEMENTATION_*`` variables for the ``catalyst_replay`` executable (which can be found in the Catalyst build directory) appropriately. For example:
+
+ .. code-block:: bash
+
+    # dump conduit nodes
+    export CATALYST_DATA_DUMP_DIRECTORY=./raw_data
+    mpiexec -n N <WarpX build directory>/bin/warpx.2d ./inputs_2d catalyst.script_paths=catalyst_pipeline.py catalyst.implementation="stub"
+
+    # validate that files have been written
+    ls ./raw_data/
+    # ... many files of the format XXXX.conduit_bin.Y.Z
+
+    # replay them
+    export CATALYST_IMPLEMENTATION_NAME=paraview
+    export CATALYST_IMPLEMENTATION_PATHS=<paraview install path>/lib/catalyst
+    export CATALYST_IMPLEMENTATION_PREFER_ENV=YES
+    export CATALYST_DEBUG=1  # optional but helps to make sure the right paths are used
+    export PYTHONPATH=${PYTHONPATH}:$(pwd)  # or the path containing catalyst_pipeline.py in general
+
+    # N needs to be the same as when we generated the dump
+    mpiexec -n N <catalyst install path>/bin/catalyst_replay ./raw_data
+
+    # check extractor output, e.g.
+    ls ./datasets/
+
+ For more information, see the documentation for Catalyst replay `here <https://catalyst-in-situ.readthedocs.io/en/latest/catalyst_replay.html>`__.
101 changes: 101 additions & 0 deletions Docs/source/dataanalysis/catalyst/catalyst_simple_pipeline.py
@@ -0,0 +1,101 @@
from paraview import catalyst
from paraview.simple import * # noqa: F403


# Helper function
def create_data_extractor(data_node, filename="Dataset"):
"""Creates a data extractor that saves `data_node` to a datafile named `filename`.
The filetype is chosen based on the type of `data_node`.
Note: no rendering is performed by such an extractor. The data are
written directly to a file via VTK.
"""
VTK_TYPES = [
"vtkImageData",
"vtkRectilinearGrid",
"vtkStructuredGrid",
"vtkPolyData",
"vtkUnstructuredGrid",
"vtkUniformGridAMR",
"vtkMultiBlockDataSet",
"vtkPartitionedDataSet",
"vtkPartitionedDataSetCollection",
"vtkHyperTreeGrid",
]
FILE_ASSOCIATIONS = [
"VTI",
"VTR",
"VTS",
"VTP",
"VTU",
"VTH",
"VTM",
"VTPD",
"VTPC",
"HTG",
]
clientside_data = data_node.GetClientSideObject().GetOutputDataObject(
0
) # Gets the dataobject from the default output port

# Loop is required because .IsA() detects valid classes that inherit from the VTK_TYPES
for i, vtk_type in enumerate(VTK_TYPES):
if clientside_data.IsA(vtk_type):
filetype = FILE_ASSOCIATIONS[i]
extractor = CreateExtractor(
filetype, data_node, registrationName=f"_{filetype}"
)
extractor.Writer.FileName = filename + "_{timestep:}" + f".{filetype}"
return extractor

raise RuntimeError(f"Unsupported data type: {clientside_data.GetClassName()}")


# Camera settings
paraview.simple._DisableFirstRenderCameraReset()  # Prevent ParaView from resetting the camera on the first render

# Options
options = catalyst.Options()

options.CatalystLiveTrigger = "TimeStep" # "Python", "TimeStep", "TimeValue"
options.EnableCatalystLive = 0 # 0 (disabled), 1 (enabled)
if options.EnableCatalystLive == 1:
options.CatalystLiveURL = "localhost:22222" # localhost:22222 is default

options.ExtractsOutputDirectory = "datasets" # Base for where all files are saved
options.GenerateCinemaSpecification = 0 # 0 (disabled), 1 (enabled), generates additional descriptor files for cinema exports
options.GlobalTrigger = "TimeStep" # "Python", "TimeStep", "TimeValue"

meshSource = PVTrivialProducer(
registrationName="mesh"
) # "mesh" is the node where the mesh data is stored
create_data_extractor(meshSource, filename="meshdata")
particleSource = PVTrivialProducer(
registrationName="particles"
) # "particles" is the node where particle data is stored
create_data_extractor(particleSource, filename="particledata")


# Called on catalyst initialize (after Cxx side initialize)
def catalyst_initialize():
return


# Called on catalyst execute (after Cxx side update)
def catalyst_execute(info):
print(f"Time: {info.time}, Timestep: {info.timestep}, Cycle: {info.cycle}")
return


# Callback if global trigger is set to "Python"
def is_activated(controller):
return True


# Called on catalyst finalize (after Cxx side finalize)
def catalyst_finalize():
return


if __name__ == "__main__":
paraview.simple.SaveExtractsUsingCatalystOptions(options)
22 changes: 21 additions & 1 deletion Docs/source/developers/testing.rst
@@ -81,6 +81,24 @@ For easier debugging, it can be convenient to run the tests on your local machin
ctest --test-dir build -E laser_acceleration
* Sometimes two or more tests share a large number of input parameters and differ by a small set of options.
Such tests typically also share a base string in their names.
For example, you can find three different tests named ``test_3d_langmuir_multi``, ``test_3d_langmuir_multi_nodal`` and ``test_3d_langmuir_multi_picmi``.
In such a case, if you wish to run the test ``test_3d_langmuir_multi`` only, this can be done again with the ``-R`` regular `expression filter <https://regex101.com>`__ via

.. code-block:: sh

   ctest --test-dir build -R "test_3d_langmuir_multi\..*"
Note that filtering with ``-R "test_3d_langmuir_multi"`` would include the additional tests that have the same substring in their name and would not be sufficient to isolate a single test.
Note also that the escaping ``\.`` in the regular expression is necessary in order to take into account the fact that each test is automatically appended with the strings ``.run``, ``.analysis`` and possibly ``.cleanup``.

* Run only tests not labeled with the ``slow`` label:

.. code-block:: sh

   ctest --test-dir build -LE slow
Once the execution of CTest is completed, you can find all files associated with each test in its corresponding directory under ``build/bin/``.
For example, if you run the single test ``test_3d_laser_acceleration``, you can find all files associated with this test in the directory ``build/bin/test_3d_laser_acceleration/``.

@@ -155,7 +173,9 @@ A new test can be added by adding a corresponding entry in ``CMakeLists.txt`` as
If you need a new Python package dependency for testing, please add it in `Regression/requirements.txt <https://github.com/ECP-WarpX/WarpX/blob/development/Regression/requirements.txt>`__.

- Sometimes, two tests share a large number of input parameters. The shared input parameters can be collected in a "base" input file that can be passed as a runtime parameter in the actual test input files through the parameter ``FILE``.
+ Sometimes two or more tests share a large number of input parameters. The shared input parameters can be collected in a "base" input file that can be passed as a runtime parameter in the actual test input files through the parameter ``FILE``.
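
As an illustrative sketch (the file name and the overridden parameters below are hypothetical), a test input file can then consist of the shared base file plus a few overrides:

.. code-block:: ini

   # inputs_test_3d_example: pull in the shared parameters ...
   FILE = inputs_base_3d

   # ... and override only what is specific to this test
   max_step = 10
   diag1.intervals = 10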

If the new test is added in a new directory that did not exist before, please add the name of that directory with the command ``add_subdirectory`` in `Physics_applications/CMakeLists.txt <https://github.com/ECP-WarpX/WarpX/tree/development/Examples/Physics_applications/CMakeLists.txt>`__ or `Tests/CMakeLists.txt <https://github.com/ECP-WarpX/WarpX/tree/development/Examples/Tests/CMakeLists.txt>`__, depending on where the new test directory is located.
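
For instance, if the new test lives in a hypothetical directory ``Examples/Tests/my_new_test``, the corresponding line would be:

.. code-block:: cmake

   add_subdirectory(my_new_test)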

Naming conventions for automated tests
--------------------------------------
10 changes: 10 additions & 0 deletions Docs/source/highlights.rst
@@ -90,6 +90,11 @@ Scientific works in laser-ion acceleration and laser-matter interaction.
Physical Review Research **6**, 033148, 2024.
`DOI:10.1103/PhysRevResearch.6.033148 <https://doi.org/10.1103/PhysRevResearch.6.033148>`__

#. Zaïm N, Sainte-Marie A, Fedeli L, Bartoli P, Huebl A, Leblanc A, Vay J-L, Vincenti H.
**Light-matter interaction near the Schwinger limit using tightly focused Doppler-boosted lasers**.
Physical Review Letters **132**, 175002, 2024.
`DOI:10.1103/PhysRevLett.132.175002 <https://doi.org/10.1103/PhysRevLett.132.175002>`__

#. Knight B, Gautam C, Stoner C, Egner B, Smith J, Orban C, Manfredi J, Frische K, Dexter M, Chowdhury E, Patnaik A (2023).
**Detailed Characterization of a kHz-rate Laser-Driven Fusion at a Thin Liquid Sheet with a Neutron Detection Suite**.
High Power Laser Science and Engineering, 1-13, 2023.
@@ -110,6 +115,11 @@ Scientific works in laser-ion acceleration and laser-matter interaction.
Phys. Rev. Accel. Beams **25**, 093402, 2022.
`DOI:10.1103/PhysRevAccelBeams.25.093402 <https://doi.org/10.1103/PhysRevAccelBeams.25.093402>`__

#. Fedeli L, Sainte-Marie A, Zaïm N, Thévenet M, Vay J-L, Myers A, Quéré F, Vincenti H.
**Probing strong-field QED with Doppler-boosted PetaWatt-class lasers**.
Physical Review Letters **127**, 114801, 2021.
`DOI:10.1103/PhysRevLett.127.114801 <https://doi.org/10.1103/PhysRevLett.127.114801>`__


Particle Accelerator & Beam Physics
***********************************
2 changes: 1 addition & 1 deletion Docs/source/install/dependencies.rst
@@ -28,7 +28,7 @@ Optional dependencies include:
- `FFTW3 <http://www.fftw.org>`__: for spectral solver (PSATD or IGF) support when running on CPU or SYCL

- also needs the ``pkg-config`` tool on Unix
-  - `heFFTe 2.4.0+ <https://github.com/icl-utk-edu/heffte`__: for multi-node spectral solver (IGF) support
+  - `heFFTe 2.4.0+ <https://github.com/icl-utk-edu/heffte>`__: for multi-node spectral solver (IGF) support
- `BLAS++ <https://github.com/icl-utk-edu/blaspp>`__ and `LAPACK++ <https://github.com/icl-utk-edu/lapackpp>`__: for spectral solver (PSATD) support in RZ geometry
- `Boost 1.66.0+ <https://www.boost.org/>`__: for QED lookup tables generation support
- `openPMD-api 0.15.1+ <https://github.com/openPMD/openPMD-api>`__: we automatically download and compile a copy of openPMD-api for openPMD I/O support