Check that all E-field values are finite in Ohm solver #5417

Open · roelof-groenewald wants to merge 5 commits into development

Conversation

roelof-groenewald (Member)

This assert is meant to help identify the origin of a common problem in hybrid-PIC simulations, wherein unresolved Whistler waves cause runaway E-field values. Currently when this happens, WarpX fails in the current deposition routine.

roelof-groenewald added the component: fluid-ohm label (Related to the Ohm's law solver (with fluid electrons)) on Oct 27, 2024
kli-jfp (Contributor) commented Oct 28, 2024

Great idea! If this impacts the code’s speed, perhaps a debug flag should be used for this.

EZoni (Member) commented Oct 28, 2024

@roelof-groenewald

The Intel oneAPI workflows have been fixed in #5419; please rebase on or merge development to fix those workflows here.

roelof-groenewald (Member, Author)

> Great idea! If this impacts the code’s speed, perhaps a debug flag should be used for this.

Thanks @kli-jfp. Local testing didn't show any noticeable performance penalty from the new check, since the field solver already includes quite a few reduction operations (so adding one more isn't really noticeable). But if you find that it does affect performance in your application, we can definitely update it to only run the check in debug mode.
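
A minimal sketch of that debug-only option (not what this PR does), assuming AMReX's AMREX_DEBUG macro is an appropriate gate:

#ifdef AMREX_DEBUG
// Only perform the finiteness check in debug builds; release builds skip it entirely.
for (int idim = 0; idim < 3; ++idim) {
    WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
        Efield_fp[lev][idim]->is_finite(),
        "Non-finite value detected in E-field; this indicates more substeps should be used in the field solver."
    );
}
#endif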

for (int idim = 0; idim < 3; ++idim) {
    WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
        Efield_fp[lev][idim]->is_finite(),
        "Non-finite value detected in E-field; this indicates more substeps should be used in the field solver."
    );
}
Member commented on this code:

We could:

  • Set the flag local=true to avoid parallel operations (reduction)
  • Use amrex::Abort so that it does not hang if only one of the MPI ranks fails (sketched below).
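
A minimal sketch of this suggestion, reusing the names from the snippet above: each rank checks its own data (local=true skips the MPI reduction) and calls amrex::Abort on failure.

for (int idim = 0; idim < 3; ++idim) {
    // Per-rank check only; no collective communication here.
    if (!Efield_fp[lev][idim]->is_finite(true)) {
        amrex::Abort("Non-finite value detected in E-field; this indicates more substeps should be used in the field solver.");
    }
}

As the follow-up comment notes, this variant was found to hang in practice when only some ranks detect the problem.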

roelof-groenewald (Member, Author) commented Oct 30, 2024

I tested that approach and it still hangs if not all processors call Abort, so I reverted to using local=false. I would imagine the way that is implemented is basically to call is_finite locally and then reduce the resulting boolean across all ranks, which, being a single bit, should be very fast.
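
A sketch of that presumed implementation (an assumption about the AMReX internals, not taken from its source), using amrex::ParallelDescriptor::ReduceBoolAnd for the single-boolean reduction:

// Each rank checks its own data, then the booleans are AND-reduced so that
// every rank sees the same result and the assert fires collectively.
bool all_finite = Efield_fp[lev][idim]->is_finite(true);
amrex::ParallelDescriptor::ReduceBoolAnd(all_finite);
WARPX_ALWAYS_ASSERT_WITH_MESSAGE(all_finite,
    "Non-finite value detected in E-field; this indicates more substeps should be used in the field solver.");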

RemiLehe self-assigned this on Oct 28, 2024
roelof-groenewald (Member, Author)

Closing and reopening to trigger Azure tests.

{
    for (int idim = 0; idim < 3; ++idim) {
        WARPX_ALWAYS_ASSERT_WITH_MESSAGE(
            Efield_fp[lev][idim]->is_finite(),
            "Non-finite value detected in E-field; this indicates more substeps should be used in the field solver."
        );
    }
}
roelof-groenewald (Member, Author) commented:

@WeiqunZhang this is the PR we mentioned in the developer meeting yesterday.
I found that if I use Efield_fp[lev][idim]->is_finite(true) to avoid the MPI reduction, the execution hangs (presumably because only some processes return false).
I've tested this by modifying the magnetic reconnection test to make it unstable. I attached the input script for the unstable case if you want to play around with it.

warpx_inputs.txt
