Next release (#585)
* call for contributors

* add pub ESDD
aaronspring authored Mar 23, 2021
1 parent cbaec43 commit cf89a8e
Showing 10 changed files with 201 additions and 178 deletions.
5 changes: 3 additions & 2 deletions CHANGELOG.rst
Original file line number Diff line number Diff line change
@@ -3,7 +3,7 @@ What's New
==========


climpred v2.1.3 (2021-xx-xx)
climpred v2.1.3 (2021-03-23)
============================

Breaking changes
@@ -20,7 +20,6 @@ New Features
- Added new metric :py:class:`~climpred.metrics._roc` Receiver Operating
Characteristic as ``metric='roc'``. (:pr:`566`) `Aaron Spring`_.

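The new ``metric='roc'`` is built from hit rates and false-alarm rates at a set of probability thresholds. A minimal NumPy sketch of that computation (independent of climpred's implementation, which delegates to ``xskillscore``; the function name and data here are illustrative):

```python
import numpy as np

def roc_curve(obs, prob, thresholds):
    """Hit rate and false-alarm rate of binary obs vs. forecast probabilities."""
    hr, far = [], []
    for t in thresholds:
        pred = prob >= t                      # binarize forecast at threshold t
        hits = np.sum(pred & (obs == 1))
        misses = np.sum(~pred & (obs == 1))
        false_alarms = np.sum(pred & (obs == 0))
        correct_negs = np.sum(~pred & (obs == 0))
        hr.append(hits / (hits + misses))
        far.append(false_alarms / (false_alarms + correct_negs))
    return np.array(hr), np.array(far)

obs = np.array([1, 0, 1, 1, 0, 0, 1, 0])
prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
hr, far = roc_curve(obs, prob, thresholds=[0.25, 0.5, 0.75])
```

The area under the resulting (far, hr) curve is the usual ROC skill summary; a value of 0.5 marks a no-skill forecast.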

Bug fixes
---------
- :py:meth:`~climpred.classes.HindcastEnsemble.verify` and
@@ -32,6 +31,8 @@ Bug fixes
raised. Furthermore, ``PredictionEnsemble.map(func, *args, **kwargs)``
applies only function to Datasets with matching dims if ``dim="dim0_or_dim1"`` is
passed as ``**kwargs``. (:issue:`417`, :issue:`437`, :pr:`552`) `Aaron Spring`_.
- :py:class:`~climpred.metrics._rpc` was fixed in ``xskillscore>=0.0.19`` and hence is
no longer falsely limited to 1. (:issue:`562`, :pr:`566`) `Aaron Spring`_.

Internals/Minor Fixes
---------------------
21 changes: 20 additions & 1 deletion README.rst
@@ -76,6 +76,20 @@ Verification of weather and climate forecasts.
:alt: climpred cloud demo
:target: https://github.com/aaronspring/climpred-cloud-demo

.. note::
We are actively looking for new contributors to climpred! Riley moved to McKinsey's
Climate Analytics team. Aaron is finishing his PhD in Hamburg, Germany, but will stay
in academia.
We especially hope for Python enthusiasts from the seasonal, subseasonal, or weather
prediction communities. In our coding journey so far, collaborative coding and
feedback through issues and pull requests have advanced our code and our thinking
about forecast verification more than we could have ever expected.
`Aaron <https://github.com/aaronspring/>`_ can provide guidance on
implementing new features into climpred. Feel free to implement
your own new feature or take a look at the
`good first issue <https://github.com/pangeo-data/climpred/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22>`_
tag in the issues. Please reach out to us via `gitter <https://gitter.im/climpred>`_.

Installation
============

@@ -90,7 +104,12 @@ You can install the latest release of ``climpred`` using ``pip`` or ``conda``:
conda install -c conda-forge climpred
You can also install the bleeding edge (pre-release versions) by cloning this
repository and running ``pip install . --upgrade`` in the main directory.
repository and running ``pip install . --upgrade`` in the main directory or

.. code-block:: bash
pip install git+https://github.com/pangeo-data/climpred.git
Documentation
=============
41 changes: 22 additions & 19 deletions climpred/metrics.py
@@ -2554,7 +2554,7 @@ def _discrimination(forecast, verif, dim=None, **metric_kwargs):
* event (event) bool True False
skill <U11 'initialized'
Data variables:
SST (lead, event, forecast_probability) float64 0.1481 ...
SST (lead, event, forecast_probability) float64 0.07407...
Option 2. Pre-process to generate a binary forecast and verification product:
@@ -2568,7 +2568,7 @@ def _discrimination(forecast, verif, dim=None, **metric_kwargs):
* event (event) bool True False
skill <U11 'initialized'
Data variables:
SST (lead, event, forecast_probability) float64 0.1481 ...
SST (lead, event, forecast_probability) float64 0.07407...
Option 3. Pre-process to generate a probability forecast and binary
verification product. Because ``member`` is not present in ``hindcast``, use
@@ -2584,7 +2584,7 @@ def _discrimination(forecast, verif, dim=None, **metric_kwargs):
* event (event) bool True False
skill <U11 'initialized'
Data variables:
SST (lead, event, forecast_probability) float64 0.1481 ...
SST (lead, event, forecast_probability) float64 0.07407...
"""
forecast, verif, metric_kwargs, dim = _extract_and_apply_logical(
@@ -2659,10 +2659,10 @@ def _reliability(forecast, verif, dim=None, **metric_kwargs):
Coordinates:
* lead (lead) int32 1 2 3 4 5 6 7 8 9 10
* forecast_probability (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
SST_samples (forecast_probability) float64 25.0 3.0 0.0 3.0 21.0
SST_samples (forecast_probability) float64 22.0 5.0 1.0 3.0 21.0
skill <U11 'initialized'
Data variables:
SST (lead, forecast_probability) float64 0.16 ... 1.0
SST (lead, forecast_probability) float64 0.09091 ... 1.0
Option 2. Pre-process to generate a binary forecast and verification product:
@@ -2673,10 +2673,10 @@ def _reliability(forecast, verif, dim=None, **metric_kwargs):
Coordinates:
* lead (lead) int32 1 2 3 4 5 6 7 8 9 10
* forecast_probability (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
SST_samples (forecast_probability) float64 25.0 3.0 0.0 3.0 21.0
SST_samples (forecast_probability) float64 22.0 5.0 1.0 3.0 21.0
skill <U11 'initialized'
Data variables:
SST (lead, forecast_probability) float64 0.16 ... 1.0
SST (lead, forecast_probability) float64 0.09091 ... 1.0
Option 3. Pre-process to generate a probability forecast and binary
verification product. Because ``member`` is not present in ``hindcast``, use
@@ -2689,10 +2689,10 @@ def _reliability(forecast, verif, dim=None, **metric_kwargs):
Coordinates:
* lead (lead) int32 1 2 3 4 5 6 7 8 9 10
* forecast_probability (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
SST_samples (forecast_probability) float64 25.0 3.0 0.0 3.0 21.0
SST_samples (forecast_probability) float64 22.0 5.0 1.0 3.0 21.0
skill <U11 'initialized'
Data variables:
SST (lead, forecast_probability) float64 0.16 ... 1.0
SST (lead, forecast_probability) float64 0.09091 ... 1.0
"""
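The reliability values updated above (``SST`` as observed event frequency per bin, ``SST_samples`` as bin counts) follow the standard reliability-diagram construction: bin the forecast probabilities and take the observed frequency of the event in each bin. A self-contained NumPy sketch (illustrative data; not climpred's internal code, which builds on ``xskillscore``):

```python
import numpy as np

def reliability(obs, prob, bin_edges):
    """Observed event frequency and sample count per forecast-probability bin."""
    idx = np.digitize(prob, bin_edges) - 1    # bin index for each forecast
    n_bins = len(bin_edges) - 1
    freq = np.full(n_bins, np.nan)            # NaN where a bin has no samples
    samples = np.zeros(n_bins)
    for b in range(n_bins):
        mask = idx == b
        samples[b] = mask.sum()
        if samples[b] > 0:
            freq[b] = obs[mask].mean()        # observed frequency in this bin
    return freq, samples

obs = np.array([0, 0, 1, 1, 1, 0, 1, 1])
prob = np.array([0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.6, 0.95])
freq, samples = reliability(obs, prob, bin_edges=[0, 0.5, 1.01])
```

A perfectly reliable forecast has ``freq`` equal to the bin-center probabilities, i.e. the diagram lies on the diagonal.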
if "logical" in metric_kwargs:
@@ -2805,27 +2805,30 @@ def _rps(forecast, verif, dim=None, **metric_kwargs):
Example:
>>> category_edges = np.array([-.5, 0., .5, 1.])
>>> HindcastEnsemble.verify(metric='rps', comparison='m2o', dim='member',
>>> HindcastEnsemble.verify(metric='rps', comparison='m2o', dim=['member', 'init'],
... alignment='same_verifs', category_edges=category_edges)
<xarray.Dataset>
Dimensions: (init: 52, lead: 10)
Dimensions: (lead: 10)
Coordinates:
* lead (lead) int32 1 2 3 4 5 6 7 8 9 10
* init (init) object 1964-01-01 00:00:00 ... 2015-01-01 00:00:00
skill <U11 'initialized'
* lead (lead) int32 1 2 3 4 5 6 7 8 9 10
observations_category_edge <U67 '[-np.inf, -0.5), [-0.5, 0.0), [0.0, 0.5...
forecasts_category_edge <U67 '[-np.inf, -0.5), [-0.5, 0.0), [0.0, 0.5...
skill <U11 'initialized'
Data variables:
SST (lead, init) float64 0.2696 0.2696 0.2696 ... 0.2311 0.2311 0.2311
SST (lead) float64 0.115 0.1123 ... 0.1687 0.1875
>>> category_edges = np.array([9.5, 10., 10.5, 11.])
>>> PerfectModelEnsemble.verify(metric='rps', comparison='m2c',
... dim=['member','init'], category_edges=category_edges)
<xarray.Dataset>
Dimensions: (lead: 20)
Dimensions: (lead: 20)
Coordinates:
* lead (lead) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
* lead (lead) int64 1 2 3 4 5 6 7 ... 15 16 17 18 19 20
observations_category_edge <U71 '[-np.inf, 9.5), [9.5, 10.0), [10.0, 10....
forecasts_category_edge <U71 '[-np.inf, 9.5), [9.5, 10.0), [10.0, 10....
Data variables:
tos (lead) float64 0.1512 0.2726 0.1259 0.214 ... 0.2085 0.1427 0.2757
tos (lead) float64 0.08951 0.1615 ... 0.1399 0.2274
"""
dim = _remove_member_from_dim_or_raise(dim)
if "category_edges" in metric_kwargs:
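The RPS values in the docstring above compare cumulative category probabilities of forecast and observation. As a sketch of the underlying formula (a hand-rolled NumPy toy, not the ``xskillscore`` call climpred actually makes; some conventions additionally normalize by the number of categories minus one):

```python
import numpy as np

def rps(obs_onehot, fc_probs):
    """Ranked probability score: squared distance between the cumulative
    forecast distribution and the cumulative (step-function) observation."""
    fc_cdf = np.cumsum(fc_probs, axis=-1)
    obs_cdf = np.cumsum(obs_onehot, axis=-1)
    return ((fc_cdf - obs_cdf) ** 2).sum(axis=-1)

# three ordered categories (as defined by category_edges);
# the observation fell into the middle category
fc = np.array([0.2, 0.5, 0.3])    # forecast probability per category
obs = np.array([0.0, 1.0, 0.0])   # one-hot encoded observation
score = rps(obs, fc)
```

Because the score uses cumulative distributions, probability placed in a category far from the observed one is penalized more than probability in an adjacent category.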
1 change: 0 additions & 1 deletion climpred/tests/test_PerfectModelEnsemble_class.py
@@ -368,7 +368,6 @@ def test_PerfectModel_verify_bootstrap_deterministic(
if dim == "member" and metric in pearson_r_containing_metrics:
dim = ["init", "member"]

# verify()
actual = pm.verify(
comparison=comparison,
metric=metric,
53 changes: 40 additions & 13 deletions climpred/tests/test_metrics_perfect.py
@@ -21,11 +21,24 @@

xr.set_options(display_style="text")

pearson_r_containing_metrics = [
"pearson_r",
"spearman_r",
"pearson_r_p_value",
"spearman_r_p_value",
"msess_murphy",
"bias_slope",
"conditional_bias",
"std_ratio",
"conditional_bias",
"uacc",
]


@pytest.mark.parametrize("how", ["constant", "increasing_by_lead"])
@pytest.mark.parametrize("comparison", PM_COMPARISONS)
@pytest.mark.parametrize("metric", PM_METRICS)
def test_PerfectModelEnsemble_constant_forecasts(
def test_PerfectModelEnsemble_perfect_forecasts(
perfectModelEnsemble_initialized_control, metric, comparison, how
):
"""Test that PerfectModelEnsemble.verify() returns a perfect score for a perfectly
@@ -73,24 +86,30 @@ def f(x):
comparison = "m2c"
skill = pe.verify(
metric=metric, comparison=comparison, dim=dim, **metric_kwargs
)
).tos
else:
dim = "init" if comparison == "e2c" else ["init", "member"]
skill = pe.verify(
metric=metric, comparison=comparison, dim=dim, **metric_kwargs
)
# # TODO: test assert skill.variable == perfect).all()
if metric == "contingency":
).tos

if metric == "contingency" and how == "constant":
assert (skill == 1).all() # checks Contingency.accuracy
elif metric in ["crpss", "msess"]: # identical forecast lead to nans
pass
elif Metric.perfect and metric not in pearson_r_containing_metrics:
assert (skill == Metric.perfect).all(), print(
f"{metric} perfect", Metric.perfect, "found", skill
)
else:
assert skill == Metric.perfect
pass


@pytest.mark.parametrize("alignment", ["same_inits", "same_verif", "maximize"])
@pytest.mark.parametrize("how", ["constant", "increasing_by_lead"])
@pytest.mark.parametrize("comparison", HINDCAST_COMPARISONS)
@pytest.mark.parametrize("metric", HINDCAST_METRICS)
def test_HindcastEnsemble_constant_forecasts(
def test_HindcastEnsemble_perfect_forecasts(
hindcast_hist_obs_1d, metric, comparison, how, alignment
):
"""Test that HindcastEnsemble.verify() returns a perfect score for a perfectly
@@ -152,18 +171,26 @@ def f(x):
if metric in probabilistic_metrics_requiring_more_than_member_dim
else "member",
alignment=alignment,
**metric_kwargs
)
**metric_kwargs,
).SST
else:
dim = "member" if comparison == "m2o" else "init"
skill = he.verify(
metric=metric,
comparison=comparison,
dim=dim,
alignment=alignment,
**metric_kwargs
**metric_kwargs,
).SST
if metric == "contingency" and how == "constant":
assert (skill.mean() == 1).all(), print(
f"{metric} found", skill
) # checks Contingency.accuracy
elif metric in ["msess", "crpss"]:
pass # identical forecasts produce NaNs
elif Metric.perfect and metric not in pearson_r_containing_metrics:
assert (skill == Metric.perfect).all(), print(
f"{metric} perfect", Metric.perfect, "found", skill
)
if metric == "contingency":
assert (skill == 1).all() # checks Contingency.accuracy
else:
assert skill == Metric.perfect
pass
17 changes: 9 additions & 8 deletions docs/source/contributing.rst
@@ -102,7 +102,6 @@ If you need to add new functions to the API, run ``sphinx-autogen -o api api.rst``
Preparing Pull Requests
-----------------------


#. Fork the
`climpred GitHub repository <https://github.com/pangeo-data/climpred>`__. It's
fine to use ``climpred`` as your fork repository name because it will live
@@ -136,7 +135,8 @@ Preparing Pull Requests
$ pip install --user pre-commit
$ pre-commit install

Afterwards ``pre-commit`` will run whenever you commit.
``pre-commit`` automatically formats the code, keeps it maintainable, and catches syntax errors.
Afterwards ``pre-commit`` will run whenever you commit.

https://pre-commit.com/ is a framework for managing and maintaining multi-language pre-commit
hooks to ensure code-style and code formatting is consistent.
@@ -145,9 +145,10 @@ Preparing Pull Requests
You’ll need to make sure to activate that environment next time you want
to use it after closing the terminal or your system.

You can now edit your local working copy and run/add tests as necessary. Please follow
PEP-8 for naming. When committing, ``pre-commit`` will modify the files as needed, or
will generally be quite clear about what you need to do to pass the commit test.
You can now edit your local working copy and run/add tests as necessary. Please try
to follow PEP-8 for naming. When committing, ``pre-commit`` will modify the files as
needed, or will generally be quite clear about what you need to do to pass the
commit test.

#. Break your edits up into reasonably sized commits::

@@ -176,9 +177,9 @@

#. Running the performance test suite

Performance matters and it is worth considering whether your code has introduced
performance regressions. `climpred` is starting to write a suite of benchmarking tests
using `asv <https://asv.readthedocs.io/en/stable/>`_
If you have made considerable changes to the core code of climpred, it is worth
checking whether your code has introduced performance regressions. `climpred` has a
suite of benchmarking tests using `asv <https://asv.readthedocs.io/en/stable/>`_
to enable easy monitoring of the performance of critical `climpred` operations.
These benchmarks are all found in the ``asv_bench`` directory.

17 changes: 17 additions & 0 deletions docs/source/contributors.rst
@@ -2,6 +2,21 @@
Contributors
************

.. note::
We are actively looking for new contributors to climpred! Riley moved to McKinsey's
Climate Analytics team. Aaron is finishing his PhD in Hamburg, Germany, but will stay
in academia.
We especially hope for Python enthusiasts from the seasonal, subseasonal, or weather
prediction communities. In our coding journey so far, collaborative coding and
feedback through issues and pull requests have advanced our code and our thinking
about forecast verification more than we could have ever expected.
`Aaron <https://github.com/aaronspring/>`_ can provide guidance on
implementing new features into climpred. Feel free to implement
your own new feature or take a look at the
`good first issue <https://github.com/pangeo-data/climpred/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22>`_
tag in the issues. Please reach out to us via `gitter <https://gitter.im/climpred>`_.


Core Developers
===============
* Riley X. Brady (`github <https://github.com/bradyrx/>`__)
@@ -11,6 +26,8 @@ Contributors
============
* Andrew Huang (`github <https://github.com/ahuang11/>`__)
* Kathy Pegion (`github <https://github.com/kpegion/>`__)
* Anderson Banihirwe (`github <https://github.com/andersy005/>`__)
* Ray Bell (`github <https://github.com/raybellwaves/>`__)

For a list of all the contributions, see the github
`contribution graph <https://github.com/pangeo-data/climpred/graphs/contributors>`_.
157 changes: 48 additions & 109 deletions docs/source/examples/decadal/Significance.ipynb

Large diffs are not rendered by default.
