This is a summary of conversation with @OmarDuran, who gets all credits for the observations reported below.
It turns out there is a memory leak related to the function `pp.matrix_operations.invert_diagonal_blocks` when it is called with default parameters (that is, unless the parameter `method` is explicitly set to `python`). The source of the leak is the numba accelerator: it seems that the jit-compiled numba function does not properly release memory after execution. The severity of the problem scales with the problem size (the size of the diagonal blocks to be inverted) and with the number of invocations of the inverter.
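One way such growth can be observed is by tracking the process's peak resident memory across repeated invocations. Below is a hedged sketch of that diagnostic pattern; `fn` would be a call to the suspect inverter, but a harmless stand-in is used here. Note that the stdlib `resource` module is Unix-only, and `ru_maxrss` units differ by platform (kilobytes on Linux, bytes on macOS).

```python
# Sketch of a leak check: measure peak resident set size (RSS) before and
# after many invocations of a function. A leaking function shows growth that
# scales with the number of calls; a well-behaved one plateaus.
import resource


def rss() -> int:
    """Peak resident set size of this process (platform-dependent units)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


def check_memory_growth(fn, n_calls: int = 50) -> int:
    """Return growth in peak RSS across n_calls repeated calls to fn."""
    before = rss()
    for _ in range(n_calls):
        fn()
    return rss() - before


# Stand-in workload; in the real diagnosis, fn would invoke
# pp.matrix_operations.invert_diagonal_blocks on a fixed test matrix.
growth = check_memory_growth(lambda: sum(range(10_000)))
```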
On the numba GitHub page, there are a number of open issues related to memory leaks, so this seems to be a recurring problem with numba. While rearranging the Python code to be compiled might alleviate the problem, this does not seem like a long-term solution.
For the moment, the numba inverter should be avoided when running simulations that call the inverter many times, that is, when the memory leak becomes a real issue. The only known workaround is to use the python inverter, which does not suffer from the leak, but is slower than numba. Alternative future approaches include new inverters based on `cython` or `petsc`, or hoping that the problem is fixed in new versions of `numba`. TBC.
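For reference, a minimal pure-NumPy/SciPy sketch of what the leak-free `python` path does conceptually: invert each diagonal block densely and reassemble. The function name and signature here are illustrative, not PorePy's actual internals; in practice one would simply pass `method="python"` to `pp.matrix_operations.invert_diagonal_blocks`.

```python
# Illustrative block-diagonal inversion in pure Python/NumPy (no numba),
# i.e. the slower but leak-free strategy described above.
import numpy as np
import scipy.sparse as sps


def invert_diagonal_blocks_py(a: sps.csr_matrix,
                              block_sizes: np.ndarray) -> sps.csr_matrix:
    """Invert each diagonal block of a block-diagonal sparse matrix."""
    inverted = []
    offset = 0
    for size in block_sizes:
        # Extract one diagonal block as a dense array and invert it.
        block = a[offset:offset + size, offset:offset + size].toarray()
        inverted.append(np.linalg.inv(block))
        offset += size
    return sps.block_diag(inverted, format="csr")


# Example: two 2x2 diagonal blocks.
m = sps.block_diag([np.array([[2.0, 0.0], [0.0, 4.0]]),
                    np.array([[1.0, 1.0], [0.0, 1.0]])], format="csr")
inv = invert_diagonal_blocks_py(m, np.array([2, 2]))
```

The cost is dominated by the dense `np.linalg.inv` per block, which is why the numba-compiled variant is preferred for performance when the leak is not a concern.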
Again, big thanks to @OmarDuran for uncovering and tracking down this problem.