
Implicit Field Solve Preconditioner based on Curl-Curl Operator #5286

Merged: 31 commits into ECP-WarpX:development on Nov 4, 2024

Conversation

@debog (Contributor) commented on Sep 18, 2024:

Implemented a preconditioner for the implicit E-field solve using the AMReX curl-curl operator and the MLMG solver.
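
Schematically (my reading of the approach; the PR text does not spell out the operator), the theta-implicit E-field Jacobian has a curl-curl structure of the form

$$\nabla \times \left( \alpha \, \nabla \times \delta E \right) + \beta \, \delta E = r, \qquad \alpha \sim (c\,\theta\,\Delta t)^2, \quad \beta \sim 1,$$

which matches the class of problems that AMReX's MLCurlCurl operator and MLMG solver handle; an approximate multigrid solve of this system is what gets applied as the preconditioner inside each GMRES iteration.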

  • Introduced a Preconditioner base class that defines the action of a preconditioner for the JFNK algorithm (a sketch of such an interface is shown after this list).
  • Implemented CurlCurlMLMGPC, which uses the multigrid solution of the curl-curl operator (implemented in AMReX) to precondition the E-field JFNK solve.

Other changes needed for this:

  • Partially implemented a mapping between WarpX field boundary types and AMReX's linear operator boundary types.
  • Added functionality to the ImplicitSolver class that allows preconditioners to access WarpX information (e.g., Geometry and boundary types).
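
For illustration, here is a minimal sketch of what such a base class might look like (the names and signatures are assumptions for illustration, not the exact API in Source/NonlinearSolvers/Preconditioner.H):

```cpp
// Minimal sketch of a JFNK preconditioner interface (illustrative only;
// names and signatures are assumptions, not the exact WarpX API).
template <class Vec, class Ops>
class Preconditioner
{
public:
    virtual ~Preconditioner () = default;

    // One-time setup: read input parameters and build the linear
    // operator and MLMG solver, using geometry and boundary info
    // obtained from the ops (ImplicitSolver) object.
    virtual void Define (const Vec& a_U, Ops* a_ops) = 0;

    // Refresh anything that depends on the current state or time
    // step (e.g., an alpha ~ (c*theta*dt)^2 coefficient).
    virtual void Update (const Vec& a_U) = 0;

    // Apply the preconditioner: approximately solve P x = b,
    // e.g., with a fixed number of MLMG iterations on the
    // curl-curl system.
    virtual void Apply (Vec& a_x, const Vec& a_b) = 0;
};
```

A concrete class like CurlCurlMLMGPC would then implement these hooks, with GMRES calling Apply() once per iteration.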

Some preliminary wall times for the following setup:

Test: inputs_vandb_2d
  Grid: 160 x 160
  dt: 0.125/wpe = 2.22e-18 s (dt_CFL = 7.84e-19 s, CFL = 2.83)
  Time iterations: 20

Solver parameters:
  newton.max_iterations = 10
  newton.relative_tolerance = 1.0e-12
  newton.absolute_tolerance = 0.0
  gmres.max_iterations = 1000
  gmres.relative_tolerance = 1.0e-8
  gmres.absolute_tolerance = 0.0

Avg. GMRES iterations: ~3 with the preconditioner (wPC), ~27 without (noPC)

with 32^2 particles per cell:

Lassen (MPI + CUDA)
-------------------
  Box  GPU   Walltime (s)
             wPC       noPC
   1    1    2324.7    15004.1
   4    1    2306.8    14356.8
   4    4     758.9     3647.3

Dane (MPI + OMP)
----------------
  Box  CPU  Threads   Walltime (s)
                      wPC      noPC
   1    1      1      6709.3   43200.0*
   1    1      2      3279.1   22296.1
   1    1      4      1696.3   11613.2
   1    1      8      1085.0    6911.4
   1    1     16       724.3    4729.0
   4    1      1      5525.9   33288.8
  16    1      1      4419.4   28467.8
   4    4      1      1324.4    9121.1
  16   16      1       524.9    3658.8

* 43200.0 s is 12 hours (the maximum job duration on Dane);
the simulation was almost done (it had started the 20th step).

with 10^2 particles per cell:

Lassen (MPI + CUDA)
-------------------
  Box  GPU   Walltime (s)
             wPC       noPC
   1    1    365.0     1443.5 
   4    1    254.1      927.8 
   4    4    133.1      301.5 

Dane (MPI + OMP)
----------------
  Box  CPU  Threads   Walltime (s)
                      wPC      noPC
   1    1      1      440.8    2360.5     
   1    1      2      241.7    1175.8 
   1    1      4      129.3     727.0 
   1    1      8       94.2     407.5 
   1    1     16       74.3     245.6 
   4    1      1      393.3    1932.5 
  16    1      1      337.6    1618.7 
   4    4      1       92.2     479.1 
  16   16      1       58.1     192.6 

@debog added the labels "enhancement" and "component: implicit solvers" on Sep 18, 2024.
@debog marked this pull request as ready for review on September 19, 2024.
Commit note: "…X is not yet implemented in 1D; added preprocessor guards to avoid using it in 1D" (see the sketch below).
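
As an illustration of the kind of guard this refers to (a sketch; the function name is hypothetical and the exact macro placement in the PR may differ — WARPX_DIM_1D_Z is WarpX's 1D dimensionality macro):

```cpp
#include <AMReX.H>

// Hypothetical helper illustrating the 1D guard: the AMReX curl-curl
// operator is not implemented in 1D, so the preconditioner must not
// be constructed in 1D builds.
void MakeCurlCurlPC ()
{
#if defined(WARPX_DIM_1D_Z)
    amrex::Abort("CurlCurlMLMGPC: curl-curl operator not implemented in 1D");
#else
    // ... construct the CurlCurlMLMGPC and its MLMG solver ...
#endif
}
```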
Review threads (all resolved) on: Source/NonlinearSolvers/Preconditioner.H, Source/WarpX.H, Source/NonlinearSolvers/CurlCurlMLMGPC.H, Source/NonlinearSolvers/NewtonSolver.H
@@ -85,6 +87,16 @@ public:
int a_nl_iter,
bool a_from_jacobian ) = 0;

[[nodiscard]] virtual amrex::Real theta () const { return 1.0; }
A contributor commented on the added theta() accessor:
Making this a member of the base class is fine for now. In a future PR, we can remove it by passing in m_theta*a_dt to the solver. This will require modifications to several files in the ImplicitSolver and NonlinearSolvers folders, but it will make the code tidier. I can do this after this PR is merged.
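
For concreteness, a sketch of the suggested cleanup (the types and names here are hypothetical, purely to illustrate the idea):

```cpp
// Illustrative sketch: fold theta into the time step at the call
// site, so the solver base class no longer needs a theta() accessor.
struct NonlinearSolverSketch
{
    // The solver only ever sees the already-scaled time step.
    void Solve (double a_scaled_dt)
    {
        /* ... use a_scaled_dt in the Newton/GMRES update ... */
        (void) a_scaled_dt;
    }
};

void AdvanceSketch (NonlinearSolverSketch& solver,
                    double a_dt, double m_theta)
{
    // Instead of solver internals querying ops->theta(), the
    // ImplicitSolver passes m_theta * a_dt down directly.
    solver.Solve(m_theta * a_dt);
}
```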

@JustinRayAngus (Contributor) left a review:
This PR looks good to me.

@JustinRayAngus requested review from WeiqunZhang and then removed the request on October 30, 2024.
@JustinRayAngus merged commit c1cd7ab into ECP-WarpX:development on Nov 4, 2024 (37 checks passed).