@@ -30,7 +30,7 @@ consistency tests and they verify whether a forecast is consistent with an obser
that can be used to compare the performance of two (or more) competing forecasts.
PyCSEP implements the following evaluation routines for grid-based forecasts. These functions are intended to work with
:class:`GriddedForecasts<csep.core.forecasts.GriddedForecast>` and :class:`CSEPCatalogs<csep.core.catalogs.CSEPCatalog>`.
- Visit the :ref:`catalogs reference<catalogs-reference>` and the :ref:`forecasts reference<forecasts-reference>` to learn
+ Visit the :ref:`catalogs reference<catalogs-reference>` and the :ref:`forecasts reference<forecast-reference>` to learn
more about how to import your forecasts and catalogs into PyCSEP.
.. note::
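
A minimal sketch of calling one of the grid-based consistency tests, assuming the routines live in
``csep.core.poisson_evaluations``; the forecast file name, dates, and ComCat query below are placeholders:

.. code-block:: python

    import csep
    from csep.core import poisson_evaluations
    from csep.utils.time_utils import strptime_to_utc_datetime

    # Hypothetical forecast file and time horizon; replace with your own data.
    start_time = strptime_to_utc_datetime('2010-01-01 00:00:00.0')
    end_time = strptime_to_utc_datetime('2011-01-01 00:00:00.0')
    forecast = csep.load_gridded_forecast('example_forecast.dat',
                                          start_date=start_time,
                                          end_date=end_time)

    # Observed catalog over the forecast horizon (ComCat query shown as one option),
    # trimmed to the forecast's magnitude range and spatial region.
    catalog = csep.query_comcat(start_time, end_time,
                                min_magnitude=forecast.min_magnitude)
    catalog = catalog.filter_spatial(forecast.region)

    # Consistency test comparing the observed event count with the forecast's expectation.
    result = poisson_evaluations.number_test(forecast, catalog)
    print(result.quantile)
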
@@ -105,6 +105,8 @@ Consistency tests
magnitude_test
pseudolikelihood_test
calibration_test
+ resampled_magnitude_test
+ MLL_magnitude_test
Publication reference
=====================
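
The consistency tests listed above appear to be the catalog-based evaluations in ``csep.core.catalog_evaluations``.
A hedged sketch of running the magnitude test on a catalog-based forecast, with hypothetical file names and assuming
the observed catalog has already been filtered as described in the "Preparing evaluation catalog" section below:

.. code-block:: python

    import csep
    from csep.core import catalog_evaluations

    # Hypothetical files; replace with your own catalog-based forecast and
    # observed catalog (already filtered to the forecast's magnitude range,
    # region, and time window).
    forecast = csep.load_catalog_forecast('simulated_catalogs.csv')
    observed_catalog = csep.load_catalog('observed_catalog.csv')

    # Magnitude consistency test (Savran et al., 2020).
    result = catalog_evaluations.magnitude_test(forecast, observed_catalog)
    print(result.quantile)
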
@@ -114,13 +116,16 @@ Publication reference
3. Magnitude test (:ref:`Savran et al., 2020<savran-2020>`)
4. Pseudolikelihood test (:ref:`Savran et al., 2020<savran-2020>`)
5. Calibration test (:ref:`Savran et al., 2020<savran-2020>`)
+ 6. Resampled Magnitude Test (Serafini et al., in-prep)
+ 7. MLL Magnitude Test (Serafini et al., in-prep)
****************************
Preparing evaluation catalog
****************************
The evaluations in PyCSEP do not implicitly filter the observed catalogs or modify the forecast data when called. In most
cases, the observation catalog should be filtered according to the following criteria (see the sketch after this list):
+
1. Magnitude range of the forecast
2. Spatial region of the forecast
3. Start and end time of the forecast
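
A minimal sketch of these three filtering steps, assuming the forecast object carries ``min_magnitude``,
``start_time``, ``end_time``, and ``region`` attributes; the file names below are placeholders:

.. code-block:: python

    import csep
    from csep.utils.time_utils import datetime_to_utc_epoch

    # Hypothetical forecast and observed catalog; replace with your own objects.
    # The forecast is assumed to have been loaded with explicit start/end dates and a region.
    forecast = csep.load_gridded_forecast('example_forecast.dat')
    catalog = csep.load_catalog('observed_catalog.csv')

    # 1. Magnitude range of the forecast
    catalog = catalog.filter(f'magnitude >= {forecast.min_magnitude}')

    # 2. Spatial region of the forecast
    catalog = catalog.filter_spatial(forecast.region)

    # 3. Start and end time of the forecast (origin_time filters take epoch milliseconds)
    start_epoch = datetime_to_utc_epoch(forecast.start_time)
    end_epoch = datetime_to_utc_epoch(forecast.end_time)
    catalog = catalog.filter([f'origin_time >= {start_epoch}',
                              f'origin_time < {end_epoch}'])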