The feasibility of generic conditioned scenarios must be assessed, because a naive approach easily gives numbers such as 46.5 TB (terabytes) of memory used. The problem is that the memory occupation is QUARTIC in the maximum distance.
Suppose maximum_distance=300 km (as it is now) and suppose the site model has a resolution of 1 km (it could even have a higher resolution). The number of sites in a disk of radius 300 km is then about 3.14 * 300^2 = 282_600. The size of the matrices tau and phi is
size = 10 gsims x 4 imts x 282_600^2 site pairs x 8 bytes x 2 matrices ≈ 46.5 TB
with reasonable numbers of GSIMs (we have more than 10 GSIMs per TRT in Europe) and IMTs.
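As a sanity check, here is a minimal Python sketch of the same back-of-the-envelope estimate (not engine code; the function name and defaults are just for illustration):

```python
import math

def conditioned_gmf_memory_gb(max_dist_km=300, spacing_km=1.0,
                              num_gsims=10, num_imts=4):
    # with a site spacing of spacing_km, a disk of radius max_dist_km contains
    # roughly pi * (max_dist_km / spacing_km)**2 sites; tau and phi are
    # n_sites x n_sites float64 matrices per GSIM and IMT
    n_sites = math.pi * (max_dist_km / spacing_km) ** 2
    nbytes = num_gsims * num_imts * n_sites ** 2 * 8 * 2  # 8 bytes, 2 matrices
    return nbytes / 1024 ** 3

print(conditioned_gmf_memory_gb())     # ~47_650 GB, i.e. ~46.5 TB
print(conditioned_gmf_memory_gb(150))  # halving the distance divides it by 16
```

The number of sites grows with the square of the distance and the matrices with the square of the number of sites, hence the quartic growth in maximum_distance.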
This can be solved by setting region_grid_spacing large enough, but how exactly do we determine it?
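One possible answer (an assumption on my part, not an engine rule) is to invert the formula above: fix a memory budget and derive the coarsest spacing that keeps the tau/phi matrices below it. For example:

```python
import math

def spacing_for_budget(budget_gb, max_dist_km=300, num_gsims=10, num_imts=4):
    # largest number of sites whose tau/phi matrices fit within budget_gb
    max_sites = math.sqrt(budget_gb * 1024 ** 3 / (num_gsims * num_imts * 8 * 2))
    # invert n_sites = pi * (max_dist / spacing)**2
    return max_dist_km * math.sqrt(math.pi / max_sites)

print(spacing_for_budget(32))  # ~6.2 km spacing to stay below 32 GB
```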
The other thing to assess is the dependency on the random part, i.e. do all simulations give basically the same GMF, so that we can set number_of_ground_motion_fields=1, or not? Is there a strong dependency on the ses_seed or not?
Can we set truncation_level=0 and number_of_ground_motion_fields=1? In that case, can we parallelize by GSIM?
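Assuming the conditioned GMFs from two runs differing only in ses_seed have been exported as arrays of the same shape (the file names below are hypothetical), the seed dependency could be quantified with a few lines of numpy:

```python
import numpy as np

gmf_a = np.loadtxt('gmfs_seed_42.csv', delimiter=',')  # hypothetical export
gmf_b = np.loadtxt('gmfs_seed_43.csv', delimiter=',')  # same run, different ses_seed

# maximum relative difference between the two sets of ground motion values
rel_diff = np.abs(gmf_a - gmf_b) / np.maximum(np.abs(gmf_a), 1e-12)
print('max relative difference: %.2e' % rel_diff.max())
```

If the difference turns out to be negligible, number_of_ground_motion_fields=1 (and possibly truncation_level=0) looks like a safe simplification.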
The parameter conditioned_gmf_gb in openquake.cfg solves the memory issue: calculations that would generate matrices too big to fit in memory are killed even before starting. The dependency on the seed is ultra-small since the conditioning constrains the GMFs very strongly (if there are enough stations).
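For illustration, the kind of pre-flight check that conditioned_gmf_gb enables could look roughly like this (a sketch reusing the size estimate above, not the engine's actual implementation):

```python
def check_conditioned_gmf_size(n_sites, num_gsims, num_imts, conditioned_gmf_gb):
    # estimated size of the tau and phi matrices (float64, n_sites x n_sites each)
    required_gb = num_gsims * num_imts * n_sites ** 2 * 8 * 2 / 1024 ** 3
    if required_gb > conditioned_gmf_gb:
        raise MemoryError('the conditioned GMF matrices would require %.1f GB, '
                          'more than conditioned_gmf_gb=%s' %
                          (required_gb, conditioned_gmf_gb))

check_conditioned_gmf_size(282_600, 10, 4, conditioned_gmf_gb=32)  # raises MemoryError
```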