Add ability to derive variables and add selected derived forcings #34

Open — wants to merge 70 commits into main from feature/derive_forcings

Changes shown from 50 of 70 commits.

Commits:
981d676
First attempt at adding derived forcings
ealerskans Oct 28, 2024
79a94db
Re-structure approach
ealerskans Nov 6, 2024
f37161c
Add derivation of cyclic encoded hour of day and day of year
ealerskans Nov 6, 2024
71afd3a
Add derivation of cyclic encoded time of year
ealerskans Nov 6, 2024
abb626b
Update and add docstrings
ealerskans Nov 6, 2024
8b1f18e
Remove time_of_year
ealerskans Nov 12, 2024
7854013
Provide the full namespace of the function
ealerskans Nov 12, 2024
7fa90bf
Rename the module with derived variables
ealerskans Nov 12, 2024
48c9e3e
Rename the function used for deriving variables
ealerskans Nov 12, 2024
8de9404
Redefine the config file for derived variables and how they are calcu…
ealerskans Nov 15, 2024
ffc030c
Remove derived variables from 'load_and_subset_dataset'
ealerskans Nov 15, 2024
692cdd3
Add try/except for derived variables when loading the dataset
ealerskans Nov 15, 2024
c0cd875
Chunk the input data with the defined output chunks
ealerskans Dec 5, 2024
55224f3
Update toa_radiation function name
ealerskans Dec 5, 2024
678ea52
Correct kwargs usage, add back dropped coordinates and return correct…
ealerskans Dec 5, 2024
9d2db07
Prepare for hour_of_day and day_of_year
ealerskans Dec 5, 2024
26455bc
Add optional 'attributes' to the config of 'derived_variables' and ch…
ealerskans Dec 6, 2024
fbb6065
Add dummy function for getting lat,lon (preparation for #33)
ealerskans Dec 9, 2024
3a12f48
Add function for chunking data and checking the chunk size
ealerskans Dec 9, 2024
3ace219
Add back coordinates on the subset instead of for each derived variab…
ealerskans Dec 9, 2024
a6b61b0
Add 'hour_of_day' to example config
ealerskans Dec 9, 2024
1814297
Merge branch 'main' into feature/derive_forcings
ealerskans Dec 9, 2024
9dcace6
Rename derived variables dataset section in the example config
ealerskans Dec 9, 2024
aba6757
Remove f-string from 'name_format'
ealerskans Dec 10, 2024
143edb6
Update README
ealerskans Dec 10, 2024
6aad6d7
Merge branch 'main' into feature/derive_forcings
ealerskans Dec 11, 2024
12e0575
Update CHANGELOG
ealerskans Dec 11, 2024
000ce92
Make functions for deriving toa_radiation and datetime forcings actua…
ealerskans Dec 11, 2024
0af6319
Update docstring and variable names in 'cyclic_encoding'
ealerskans Dec 11, 2024
284db91
Add ranges to lat and lon in docstring
ealerskans Dec 12, 2024
ba161d2
Add github username to CHANGELOG entry
ealerskans Dec 12, 2024
e3d590c
Update DerivedVariable attributes to be Dict[str, str]
ealerskans Dec 12, 2024
f8cae4f
Add missing attribute to docstring
ealerskans Dec 12, 2024
8470c82
Change var names in 'calculate_toa_radiation'
ealerskans Dec 12, 2024
69afdd3
Remove unnecessary 'or None'
ealerskans Dec 12, 2024
e17ed8b
Use var name 'dim' instead of 'd'
ealerskans Dec 12, 2024
23b119f
Use var names 'key, val' instead of 'k, v'
ealerskans Dec 12, 2024
2ce53c7
Move '_check_dataset_attributes' outside if statement
ealerskans Dec 12, 2024
f1e3d77
Set '{}' as default for 'attributes' and 'chunking'
ealerskans Dec 12, 2024
2afbb35
Make types more explicit
ealerskans Dec 13, 2024
75797a2
Rename 'ds_subset' to 'ds_derived_vars' and update comment for 'ds_in…
ealerskans Dec 13, 2024
31578e8
Add 'Optional[...]' to optional attributes
ealerskans Dec 13, 2024
90e4cf2
Move loading of dataset to a separate function
ealerskans Dec 13, 2024
717c6a5
Simplify if loops
ealerskans Dec 13, 2024
2856c6b
Update '_get_derived_variable_function'
ealerskans Dec 13, 2024
98673ee
Simplify checks of the derived fields
ealerskans Dec 13, 2024
8940e82
Issue warning saying that we assume coordinates are named 'lat' and '…
ealerskans Dec 13, 2024
e12e328
Update README to make it clear that 'attributes' is associated with '…
ealerskans Dec 13, 2024
ecdea30
Indicate that 'variables' and 'derived_variables' are mutually exclusive
ealerskans Dec 13, 2024
e3c0f22
Update docstring of 'InputDataset' class
ealerskans Dec 13, 2024
e907a6d
Correct types in '_check_attributes' docstring
ealerskans Dec 13, 2024
bb9be13
Use 'rpartition' to get 'module_name' and 'function_name'
ealerskans Dec 13, 2024
49de0b3
Add some initial tests for 'derived_variables'
ealerskans Dec 13, 2024
b268f01
Update docstrings and rename 'DerivedVariable.attributes' to 'Derived…
ealerskans Dec 17, 2024
dbd5bfd
Do not add 'attributes' to docstring
ealerskans Dec 17, 2024
474a83d
Remove unnecessary exception handling
ealerskans Dec 17, 2024
1da66e2
Move 'subset_dataset' to 'ops.subsetting'
ealerskans Dec 17, 2024
dc7dc5e
Move 'derived_variables' to 'ops'
ealerskans Dec 17, 2024
c9e96af
Move chunk size check to 'chunking' module
ealerskans Dec 17, 2024
47b8411
Add module docstring
ealerskans Dec 17, 2024
5ae772f
Update tests
ealerskans Dec 17, 2024
2c0bdf8
Add global REQUIRED_FIELD_ATTRIBUTES var and updated check for requir…
ealerskans Dec 18, 2024
f1ce6d1
Update long name for toa_radiation
ealerskans Dec 18, 2024
58d8af6
Update README
ealerskans Dec 18, 2024
f87b954
Return dropped coordinates to the data-arrays instead
ealerskans Dec 19, 2024
80cf058
Adds dims to the dataset to make it work with derived variables that …
ealerskans Dec 19, 2024
da0c171
Add ability to have 'variables' and 'derived_variables' in the same
ealerskans Dec 19, 2024
f61a3b6
Update README
ealerskans Dec 19, 2024
554f869
Add 'load_config' function, which wraps 'from_yaml_file' and checks t…
ealerskans Dec 20, 2024
085aae3
Update README
ealerskans Dec 20, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -11,6 +11,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- add ability to derive variables from input datasets [\#34](https://github.com/mllam/mllam-data-prep/pull/34), @ealerskans
- add github PR template to guide development process on github [\#44](https://github.com/mllam/mllam-data-prep/pull/44), @leifdenby

## [v0.5.0](https://github.com/mllam/mllam-data-prep/releases/tag/v0.5.0)
67 changes: 61 additions & 6 deletions README.md
@@ -187,6 +187,32 @@ inputs:
name_format: "{var_name}"
target_output_variable: forcing

danra_derived_forcings:
path: https://mllam-test-data.s3.eu-north-1.amazonaws.com/single_levels.zarr
dims: [time, x, y]
derived_variables:
toa_radiation:
kwargs:
time: time
lat: lat
lon: lon
function: mllam_data_prep.derived_variables.calculate_toa_radiation
hour_of_day:
kwargs:
time: time
function: mllam_data_prep.derived_variables.calculate_hour_of_day
dim_mapping:
time:
method: rename
dim: time
grid_index:
method: stack
dims: [x, y]
forcing_feature:
method: stack_variables_by_var_name
name_format: "{var_name}"
target_output_variable: forcing

danra_lsm:
path: https://mllam-test-data.s3.eu-north-1.amazonaws.com/lsm.zarr
dims: [x, y]
@@ -286,15 +312,40 @@ inputs:
grid_index:
method: stack
dims: [x, y]
target_architecture_variable: state
target_output_variable: state

danra_surface:
path: https://mllam-test-data.s3.eu-north-1.amazonaws.com/single_levels.zarr
dims: [time, x, y]
variables:
# shouldn't really be using sea-surface pressure as "forcing", but don't
# have radiation variables in danra yet
- pres_seasurface
# use surface incoming shortwave radiation as forcing
- swavr0m
dim_mapping:
time:
method: rename
dim: time
grid_index:
method: stack
dims: [x, y]
forcing_feature:
method: stack_variables_by_var_name
name_format: "{var_name}"
target_output_variable: forcing

danra_derived_forcings:
path: https://mllam-test-data.s3.eu-north-1.amazonaws.com/single_levels.zarr
dims: [time, x, y]
derived_variables:
toa_radiation:
kwargs:
time: time
lat: lat
lon: lon
function: mllam_data_prep.derived_variables.calculate_toa_radiation
hour_of_day:
kwargs:
time: time
function: mllam_data_prep.derived_variables.calculate_hour_of_day
dim_mapping:
time:
method: rename
@@ -305,7 +356,7 @@ inputs:
forcing_feature:
method: stack_variables_by_var_name
name_format: "{var_name}"
target_architecture_variable: forcing
target_output_variable: forcing

...
```
@@ -315,11 +366,15 @@ The `inputs` section defines the source datasets to extract data from.
- `path`: the path to the source dataset. This can be a local path or a URL to e.g. a zarr dataset or netCDF file, anything that can be read by `xarray.open_dataset(...)`.
- `dims`: the dimensions that the source dataset is expected to have. This is used to check that the source dataset has the expected dimensions and also makes it clearer in the config file what the dimensions of the source dataset are.
- `variables`: selects which variables to extract from the source dataset. This may either be a list of variable names, or a dictionary where each key is the variable name and the value defines a dictionary of coordinates to do selection on. When doing selection you may also optionally define the units of the variable, to check that they match the units expected by the model architecture.
- `target_architecture_variable`: the variable in the model architecture that the source dataset should be mapped to.
- `target_output_variable`: the variable in the model architecture that the source dataset should be mapped to.
- `dim_mapping`: defines how the dimensions of the source dataset should be mapped to the dimensions of the model architecture. This is done by defining a method to apply to each dimension. The methods are:
- `rename`: simply rename the dimension to the new name
- `stack`: stack the listed dimension to create the dimension in the output
- `stack_variables_by_var_name`: stack the dimension into the new dimension, and also stack the variable name into the new variable name. This is useful when you have multiple variables with the same dimensions that you want to stack into a single variable.
- `derived_variables`: defines the variables to be derived from the variables available in the source dataset (see the sketch after this list). This should be a dictionary where each key is the name of the variable to be derived and the value is a dictionary with the following keys:
  - `function`: the function used to derive the variable. This should be a string, either the full namespace of the function (e.g. `mllam_data_prep.derived_variables.calculate_toa_radiation`) or, if the function is included in the `mllam_data_prep.derived_variables` module, the function name alone.
  - `kwargs`: arguments for the function used to derive the variable. This is a dictionary where each key is the name of a variable to select from the source dataset and each value is the name of the corresponding argument to `function`.
  - `attributes`: section where users can specify attributes (e.g. `units` and `long_name`) as a dictionary (not included in the example config file), where the keys are the attribute names and the values are strings. If using a function defined in `mllam_data_prep.derived_variables`, this section is optional since the attributes are already defined; in that case, adding attributes to the config file will overwrite the pre-defined ones. If using an external function that does not set the `units` and `long_name` attributes, this section is required.
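
To illustrate, here is a minimal sketch of what an external derived-variable function could look like. The module path `my_forcings`, the function name and the encoding choice are hypothetical (not part of this package); the calling convention follows the config examples above, where each entry in `kwargs` selects a variable from the source dataset and passes it to the function as a named argument.

```python
import numpy as np


def calculate_day_of_year_sin(time):
    """Hypothetical external function: sine-encoded day of year.

    `time` is the data-array selected via `kwargs` in the config;
    numpy ufuncs applied to an xarray DataArray return a DataArray,
    so dimensions and coordinates are preserved.
    """
    day_of_year = time.dt.dayofyear
    return np.sin(day_of_year / 365.25 * 2 * np.pi)
```

Since this function does not set `units` and `long_name` itself, it would be referenced with its full namespace, `function: my_forcings.calculate_day_of_year_sin`, together with `kwargs: {time: time}` and a required `attributes` section such as `{units: "1", long_name: sine-encoded day of year}`.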


### Config schema versioning
26 changes: 26 additions & 0 deletions example.danra.yaml
@@ -73,6 +73,32 @@ inputs:
name_format: "{var_name}"
target_output_variable: forcing

danra_derived_forcings:
path: https://mllam-test-data.s3.eu-north-1.amazonaws.com/single_levels.zarr
dims: [time, x, y]
derived_variables:
toa_radiation:
kwargs:
time: time
lat: lat
lon: lon
function: mllam_data_prep.derived_variables.calculate_toa_radiation
hour_of_day:
kwargs:
time: time
function: mllam_data_prep.derived_variables.calculate_hour_of_day
dim_mapping:
time:
method: rename
dim: time
grid_index:
method: stack
dims: [x, y]
forcing_feature:
method: stack_variables_by_var_name
name_format: "{var_name}"
target_output_variable: forcing

danra_lsm:
path: https://mllam-test-data.s3.eu-north-1.amazonaws.com/lsm.zarr
dims: [x, y]
46 changes: 37 additions & 9 deletions mllam_data_prep/config.py
@@ -52,6 +52,24 @@ class ValueSelection:
units: str = None


@dataclass
class DerivedVariable:
"""
Defines a derived variable, specifying the kwargs (the variables
required for the calculation) and the function used to calculate
the variable.

Attributes:
kwargs: Variables required for calculating the derived variable.
function: Function used to calculate the derived variable.
attributes: Attributes (e.g. `units` and `long_name`) for the derived variable.
"""

kwargs: Dict[str, str]
function: str
attributes: Optional[Dict[str, str]] = field(default_factory=dict)
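
The `function` string is resolved to a callable at runtime. A plausible sketch of that resolution, based on the "Use 'rpartition'" and `_get_derived_variable_function` commits above — the exact implementation in this PR may differ:

```python
import importlib


def _get_derived_variable_function(function_namespace):
    """Resolve the `function` config string to a callable.

    "pkg.module.func" is split into module and function name with
    rpartition; a bare function name is assumed to live in the
    mllam_data_prep.derived_variables module.
    """
    module_name, _, function_name = function_namespace.rpartition(".")
    if module_name == "":
        module_name = "mllam_data_prep.derived_variables"
    module = importlib.import_module(module_name)
    return getattr(module, function_name)
```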


@dataclass
class DimMapping:
"""
@@ -120,7 +138,8 @@ class InputDataset:
1) the path to the dataset,
2) the expected dimensions of the dataset,
3) the variables to select from the dataset (and optionally subsection
along the coordinates for each variable) and finally
along the coordinates for each variable) or the variables to derive
from the dataset, and finally
4) the method by which the dimensions and variables of the dataset are
mapped to one of the output variables (this includes stacking of all
the selected variables into a new single variable along a new coordinate,
@@ -134,11 +153,6 @@
dims: List[str]
List of the expected dimensions of the dataset. E.g. `["time", "x", "y"]`.
These will be checked to ensure consistency of the dataset being read.
variables: Union[List[str], Dict[str, Dict[str, ValueSelection]]]
List of the variables to select from the dataset. E.g. `["temperature", "precipitation"]`
or a dictionary where the keys are the variable names and the values are dictionaries
defining the selection for each variable. E.g. `{"temperature": levels: {"values": [1000, 950, 900]}}`
would select the "temperature" variable and only the levels 1000, 950, and 900.
dim_mapping: Dict[str, DimMapping]
Mapping of the variables and dimensions in the input dataset to the dimensions of the
output variable (`target_output_variable`). The key is the name of the output dimension to map to
@@ -151,14 +165,28 @@
(e.g. two datasets that coincide in space and time will only differ in the feature dimension,
so the two will be combined by concatenating along the feature dimension).
If a single shared coordinate cannot be found then an exception will be raised.
variables: Union[List[str], Dict[str, Dict[str, ValueSelection]]]
List of the variables to select from the dataset. E.g. `["temperature", "precipitation"]`
or a dictionary where the keys are the variable names and the values are dictionaries
defining the selection for each variable. E.g. `{"temperature": levels: {"values": [1000, 950, 900]}}`
would select the "temperature" variable and only the levels 1000, 950, and 900.
derived_variables: Dict[str, DerivedVariable]
Dictionary of variables to derive from the dataset, where the keys are the variable names and
the values are dictionaries defining the necessary function and kwargs. E.g.
`{"toa_radiation": {"kwargs": {"time": "time", "lat": "lat", "lon": "lon"}, "function": "calculate_toa_radiation"}}`
would derive the "toa_radiation" variable using the `calculate_toa_radiation` function, which
takes `time`, `lat` and `lon` as arguments.
attributes: Dict[str, Any]
Optional dictionary with dataset attributes.
"""

path: str
dims: List[str]
variables: Union[List[str], Dict[str, Dict[str, ValueSelection]]]
dim_mapping: Dict[str, DimMapping]
target_output_variable: str
attributes: Dict[str, Any] = None
variables: Optional[Union[List[str], Dict[str, Dict[str, ValueSelection]]]] = None
derived_variables: Optional[Dict[str, DerivedVariable]] = None
attributes: Optional[Dict[str, Any]] = field(default_factory=dict)


@dataclass
@@ -258,7 +286,7 @@ class Output:

variables: Dict[str, List[str]]
coord_ranges: Dict[str, Range] = None
chunking: Dict[str, int] = None
chunking: Dict[str, int] = field(default_factory=dict)
splitting: Splitting = None


45 changes: 38 additions & 7 deletions mllam_data_prep/create_dataset.py
@@ -10,7 +10,8 @@

from . import __version__
from .config import Config, InvalidConfigException
from .ops.loading import load_and_subset_dataset
from .derived_variables import derive_variables
from .ops.loading import load_dataset, subset_dataset
from .ops.mapping import map_dims_and_variables
from .ops.selection import select_by_kwargs
from .ops.statistics import calc_stats
Expand All @@ -30,11 +31,14 @@ def _check_dataset_attributes(ds, expected_attributes, dataset_name):

# check for attributes having the wrong value
incorrect_attributes = {
k: v for k, v in expected_attributes.items() if ds.attrs[k] != v
key: val for key, val in expected_attributes.items() if ds.attrs[key] != val
}
if len(incorrect_attributes) > 0:
s_list = "\n".join(
[f"{k}: {v} != {ds.attrs[k]}" for k, v in incorrect_attributes.items()]
[
f"{key}: {val} != {ds.attrs[key]}"
for key, val in incorrect_attributes.items()
]
)
raise ValueError(
f"Dataset {dataset_name} has the following incorrect attributes: {s_list}"
@@ -120,23 +124,51 @@ def create_dataset(config: Config):

output_config = config.output
output_coord_ranges = output_config.coord_ranges
chunking_config = config.output.chunking

dataarrays_by_target = defaultdict(list)

for dataset_name, input_config in config.inputs.items():
path = input_config.path
variables = input_config.variables
derived_variables = input_config.derived_variables
target_output_var = input_config.target_output_variable
expected_input_attributes = input_config.attributes or {}
expected_input_attributes = input_config.attributes
expected_input_var_dims = input_config.dims

output_dims = output_config.variables[target_output_var]

logger.info(f"Loading dataset {dataset_name} from {path}")
try:
ds = load_and_subset_dataset(fp=path, variables=variables)
ds_source = load_dataset(fp=path)
except Exception as ex:
raise Exception(f"Error loading dataset {dataset_name} from {path}") from ex

if variables:
logger.info(f"Subsetting dataset {dataset_name}")
try:
ds = subset_dataset(
ds=ds_source, variables=variables, chunking=chunking_config
)
except Exception as ex:
[Review thread on this exception handling]

mafdmi (Dec 11, 2024): I know this is not introduced by this feature, but I'm not a particular fan of this very general exception — though maybe it is something we do to get a broader explanation of the error? Do you know why we need it, when the function itself already throws exceptions, and when we don't do anything with the exception other than re-raising it? To my mind, if a KeyError is raised in the function, we will still get the load_and_subset_dataset function listed in the traceback and thus know that the error had something to do with loading the dataset.

Member: Yes, we shouldn't need it on subset_dataset. I introduced the exception handling above as a convenience for the case where the user has provided a path to an input dataset that doesn't exist. The reason is that the default exception from xarray doesn't give you the path that it tried to load, so I wrapped the call so that I could raise my own exception that includes the path. You could just put this inside load_dataset() @ealerskans, but I think it would still be nice to wrap xr.open_dataset etc. so that the user gets the path printed when xarray fails to open a dataset.

ealerskans (Author): I have now removed the exception wrapping for subset_dataset and derive_variables, but I have kept it wrapped around the load_input_dataset function call, since I didn't want to have to pass both dataset_name and path to the load_input_dataset function. Let me know what you think.

Reply: Looks good to me :)
raise Exception(
f"Error subsetting dataset {dataset_name} from {path}"
) from ex

if derived_variables:
logger.info(f"Deriving variables from {dataset_name}")
try:
ds = derive_variables(
ds=ds_source,
derived_variables=derived_variables,
chunking=chunking_config,
)
except Exception as ex:
raise Exception(
f"Error deriving variables '{', '.join(list(derived_variables.keys()))}'"
f" from dataset {dataset_name} from {path}"
) from ex

_check_dataset_attributes(
ds=ds,
expected_attributes=expected_input_attributes,
Expand Down Expand Up @@ -191,9 +223,8 @@ def create_dataset(config: Config):

# default to making a single chunk for each dimension if chunksize is not specified
# in the config
chunking_config = config.output.chunking or {}
logger.info(f"Chunking dataset with {chunking_config}")
chunks = {d: chunking_config.get(d, int(ds[d].count())) for d in ds.dims}
chunks = {dim: chunking_config.get(dim, int(ds[dim].count())) for dim in ds.dims}
ds = ds.chunk(chunks)

splitting = config.output.splitting
Expand Down