Deprecate PointDataOutput? #416

Open

JDBetteridge opened this issue Aug 2, 2023 · 3 comments
Labels

parallel: Pull requests or issues relating to parallel capability
question: Issues that involve a question that needs answering
to bear in mind: These are issues that are things that we need to bear in mind -- these issues won't ever be closed

Comments

JDBetteridge (Member):
The non-hydrostatic Skamarock Klemp example does not run in parallel. Is this expected?

Full traceback below:

$ mpiexec -n 2 python examples/compressible/skamarock_klemp_nonhydrostatic.py --running-tests
firedrake:WARNING OMP_NUM_THREADS is not set or is set to a value greater than 1, we suggest setting OMP_NUM_THREADS=1 to improve performance
gusto:INFO Physical parameters that take non-default values:
gusto:INFO 
Traceback (most recent call last):
  File "/scratch/jbetteri/firedrake_py311_opt/src/gusto/examples/compressible/skamarock_klemp_nonhydrostatic.py", line 125, in <module>
    stepper.run(t=0, tmax=tmax)
  File "/scratch/jbetteri/firedrake_py311_opt/src/gusto/gusto/timeloop.py", line 582, in run
    super().run(t, tmax, pick_up=pick_up)
  File "/scratch/jbetteri/firedrake_py311_opt/src/gusto/gusto/timeloop.py", line 162, in run
    self.io.setup_dump(self.fields, t, pick_up)
  File "/scratch/jbetteri/firedrake_py311_opt/src/gusto/gusto/io.py", line 412, in setup_dump
    self.dump(state_fields, t)
  File "/scratch/jbetteri/firedrake_py311_opt/src/gusto/gusto/io.py", line 549, in dump
    self.pointdata_output.dump(state_fields, t)
  File "/scratch/jbetteri/firedrake_py311_opt/src/gusto/gusto/io.py", line 115, in dump
    val_list.append((field_name, np.asarray(field_creator(field_name).at(points, tolerance=self.tolerance))))
                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "petsc4py/PETSc/Log.pyx", line 115, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "petsc4py/PETSc/Log.pyx", line 116, in petsc4py.PETSc.Log.EventDecorator.decorator.wrapped_func
  File "/scratch/jbetteri/firedrake_py311_opt/src/firedrake/firedrake/function.py", line 622, in at
    raise RuntimeError("Point evaluation gave different results across processes.")
RuntimeError: Point evaluation gave different results across processes.
Abort(1) on node 0 (rank 0 in comm 496): application called MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0
Abort(1) on node 1 (rank 1 in comm 496): application called MPI_Abort(PYOP2_COMM_WORLD, 1) - process 1
tommbendall (Contributor) commented Aug 2, 2023:

What is going on here is that this example uses the "point-data" output capability, which does not work in parallel.

This "point-data" outputting is now quite old: it uses the .at routine to sample the finite element fields at particular points. Our newer netCDF outputting gives a similar capability and does work in parallel**, so I'm not sure we still need the "point-data" outputting at all ...

The short-term fix for this example is simply to remove the "point-data" outputting, since we don't actually use the output; it's only there to make sure we keep testing this capability. The longer-term fix is either to remove "point-data" outputting altogether or to test it properly (and fail gracefully if it is used in parallel).

I suggest we leave this issue open for Jemma to comment on before making a longer-term decision, but I'd be happy for you to remove the "point-data" output from that example for now.

**other than bugs with it that Jack steadily finds xD

JDBetteridge (Member, Author) commented Aug 2, 2023:

PointDataOutput now raises an exception if you try to construct it in parallel:

if self.comm.size > 1:
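
(For context, the guard is presumably along these lines; the constructor signature, exception type and message below are assumptions for illustration, not the actual Gusto code.)

# Hypothetical sketch of the parallel guard in PointDataOutput; the real
# code in gusto/io.py may use a different signature, exception and message.
from firedrake import COMM_WORLD

class PointDataOutput:
    def __init__(self, filename, field_points, comm=COMM_WORLD):
        self.comm = comm
        if self.comm.size > 1:
            # Constructing this output in parallel is not supported, so fail
            # loudly at setup time rather than mid-run.
            raise RuntimeError(
                "PointDataOutput does not work in parallel: run in serial "
                "or disable point-data output.")
        # ... set up the netCDF file holding the point data ...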

The non-hydrostatic compressible Skamarock Klemp example now prints a warning if you run it in parallel.
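
(Roughly along these lines; the exact check and wording in the example script are assumptions here.)

# Hypothetical sketch of the parallel warning in the example script;
# the actual check and message may differ.
import warnings
from firedrake import COMM_WORLD

if COMM_WORLD.size > 1:
    warnings.warn("Point-data output is disabled when running in parallel.")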

Hopefully all the tests pass on the new PR 🤞

I will change the title of this issue to reflect why it's being kept open.

JDBetteridge changed the title from "Cannot run Non-hydrostatic Skamarock Klemp in parallel" to "Deprecate PointDataOutput?" on Aug 2, 2023
JDBetteridge added the question, to bear in mind and parallel labels on Aug 2, 2023
colinjcotter (Contributor) commented Aug 2, 2023 via email
