
add chelsa recipe #133

Open · wants to merge 18 commits into master

Conversation

@hscannell commented May 2, 2022

CHELSA v2.1 is a globally downscaled climate dataset provided by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). This recipe extracts global monthly climatologies (1981-2010) for 10 variables. The outputs are returned as GeoTIFFs.

The reference for this dataset is copied below.

Karger, D.N., Conrad, O., Böhner, J., Kawohl, T., Kreft, H., Soria-Auza, R.W., Zimmermann, N.E., Linder, P., Kessler, M. (2017): Climatologies at high resolution for the Earth land surface areas. Scientific Data. 4 170122. https://doi.org/10.1038/sdata.2017.122

@pangeo-forge-bot

It looks like your meta.yaml does not conform to the specification.

1 validation error for MetaYaml
pangeo_notebook_version
  field required (type=value_error.missing)

Please correct your meta.yaml and commit the corrections to this PR.

@pangeo-forge-bot

🎉 New recipe runs created for the following recipes at sha d655b4b4558d443e861cb6c3c7fd7b023a1b75ff:

@pangeo-forge-bot

🎉 New recipe runs created for the following recipes at sha 91d431e251a08c01201c339dd425f241e8732bdc:

@cisaacstern (Member)

I'll now deploy a test run of this recipe on Pangeo Forge Cloud 🚀

@cisaacstern (Member)

/run recipe-test recipe_run_id=111

@pangeo-forge-bot

✨ A test of your recipe chelsa-v2.1 is now running on Pangeo Forge Cloud!

I'll notify you with a comment on this thread when this test is complete. (This could be a little while...)

In the meantime, you can follow the logs for this recipe run at https://pangeo-forge.org/dashboard/recipe-run/111

recipes/chelsa/recipe.py (review comment, marked outdated and resolved)
@pangeo-forge-bot

Pangeo Cloud told me that our test of your recipe chelsa-v2.1 failed. But don't worry, I'm sure we can fix this!

To see what error caused the failure, please review the logs at https://pangeo-forge.org/dashboard/recipe-run/111

If you haven't yet tried pruning and running your recipe locally, I suggest trying that now.

Please report back on the results of your local testing in a new comment below, and a Pangeo Forge maintainer will help you with next steps!

@hscannell (Author)

I tested the pruned recipe locally and got an error when running the recipe function.

recipe_pruned = recipe.copy_pruned()
recipe_pruned.file_pattern  # <FilePattern {'month': 2, 'variable': 10}>

run_function = recipe_pruned.to_function()
run_function()

The error message is quite long, but I suspect it has something to do with converting TIFFs to Zarr with xarray in pangeo_forge_recipes/recipes/xarray_zarr.py.

~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:111: RuntimeWarning: Failed to open Zarr store with consolidated metadata, falling back to try reading non-consolidated metadata. This is typically much slower for opening a dataset. To silence this warning, consider:
1. Consolidating metadata in this existing store with zarr.consolidate_metadata().
2. Explicitly setting consolidated=False, to avoid trying to read consolidate metadata, or
3. Explicitly setting consolidated=True, to raise an error in this case instead of falling back to try reading non-consolidated metadata.
  return xr.open_zarr(target.get_mapper())
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
File ~/conda/lib/python3.8/site-packages/fsspec/mapping.py:135, in FSMap.__getitem__(self, key, default)
    134 try:
--> 135     result = self.fs.cat(k)
    136 except self.missing_exceptions:

File ~/conda/lib/python3.8/site-packages/fsspec/spec.py:739, in AbstractFileSystem.cat(self, path, recursive, on_error, **kwargs)
    738 else:
--> 739     return self.cat_file(paths[0], **kwargs)

File ~/conda/lib/python3.8/site-packages/fsspec/spec.py:649, in AbstractFileSystem.cat_file(self, path, start, end, **kwargs)
    648 # explicitly set buffering off?
--> 649 with self.open(path, "rb", **kwargs) as f:
    650     if start is not None:

File ~/conda/lib/python3.8/site-packages/fsspec/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
   1008 ac = kwargs.pop("autocommit", not self._intrans)
-> 1009 f = self._open(
   1010     path,
   1011     mode=mode,
   1012     block_size=block_size,
   1013     autocommit=ac,
   1014     cache_options=cache_options,
   1015     **kwargs,
   1016 )
   1017 if compression is not None:

File ~/conda/lib/python3.8/site-packages/fsspec/implementations/local.py:155, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
    154     self.makedirs(self._parent(path), exist_ok=True)
--> 155 return LocalFileOpener(path, mode, fs=self, **kwargs)

File ~/conda/lib/python3.8/site-packages/fsspec/implementations/local.py:250, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
    249 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 250 self._open()

File ~/conda/lib/python3.8/site-packages/fsspec/implementations/local.py:255, in LocalFileOpener._open(self)
    254 if self.autocommit or "w" not in self.mode:
--> 255     self.f = open(self.path, mode=self.mode)
    256     if self.compression:

FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpjcs6z9_1/d6O56Szu/.zmetadata'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
File ~/conda/lib/python3.8/site-packages/xarray/backends/zarr.py:348, in ZarrStore.open_group(cls, store, mode, synchronizer, group, consolidated, consolidate_on_close, chunk_store, storage_options, append_dim, write_region, safe_chunks, stacklevel)
    347 try:
--> 348     zarr_group = zarr.open_consolidated(store, **open_kwargs)
    349 except KeyError:

File ~/conda/lib/python3.8/site-packages/zarr/convenience.py:1187, in open_consolidated(store, metadata_key, mode, **kwargs)
   1186 # setup metadata store
-> 1187 meta_store = ConsolidatedMetadataStore(store, metadata_key=metadata_key)
   1189 # pass through

File ~/conda/lib/python3.8/site-packages/zarr/storage.py:2644, in ConsolidatedMetadataStore.__init__(self, store, metadata_key)
   2643 # retrieve consolidated metadata
-> 2644 meta = json_loads(store[metadata_key])
   2646 # check format of consolidated metadata

File ~/conda/lib/python3.8/site-packages/zarr/storage.py:545, in KVStore.__getitem__(self, key)
    544 def __getitem__(self, key):
--> 545     return self._mutable_mapping[key]

File ~/conda/lib/python3.8/site-packages/fsspec/mapping.py:139, in FSMap.__getitem__(self, key, default)
    138         return default
--> 139     raise KeyError(key)
    140 return result

KeyError: '.zmetadata'

During handling of the above exception, another exception occurred:

GroupNotFoundError                        Traceback (most recent call last)
File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:442, in prepare_target(config)
    441 try:
--> 442     ds = open_target(config.storage_config.target)
    443     logger.info("Found an existing dataset in target")

File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:111, in open_target(target)
    110 def open_target(target: FSSpecTarget) -> xr.Dataset:
--> 111     return xr.open_zarr(target.get_mapper())

File ~/conda/lib/python3.8/site-packages/xarray/backends/zarr.py:752, in open_zarr(store, group, synchronizer, chunks, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, consolidated, overwrite_encoded_chunks, chunk_store, storage_options, decode_timedelta, use_cftime, **kwargs)
    743 backend_kwargs = {
    744     "synchronizer": synchronizer,
    745     "consolidated": consolidated,
   (...)
    749     "stacklevel": 4,
    750 }
--> 752 ds = open_dataset(
    753     filename_or_obj=store,
    754     group=group,
    755     decode_cf=decode_cf,
    756     mask_and_scale=mask_and_scale,
    757     decode_times=decode_times,
    758     concat_characters=concat_characters,
    759     decode_coords=decode_coords,
    760     engine="zarr",
    761     chunks=chunks,
    762     drop_variables=drop_variables,
    763     backend_kwargs=backend_kwargs,
    764     decode_timedelta=decode_timedelta,
    765     use_cftime=use_cftime,
    766 )
    767 return ds

File ~/conda/lib/python3.8/site-packages/xarray/backends/api.py:495, in open_dataset(filename_or_obj, engine, chunks, cache, decode_cf, mask_and_scale, decode_times, decode_timedelta, use_cftime, concat_characters, decode_coords, drop_variables, backend_kwargs, *args, **kwargs)
    494 overwrite_encoded_chunks = kwargs.pop("overwrite_encoded_chunks", None)
--> 495 backend_ds = backend.open_dataset(
    496     filename_or_obj,
    497     drop_variables=drop_variables,
    498     **decoders,
    499     **kwargs,
    500 )
    501 ds = _dataset_from_backend_dataset(
    502     backend_ds,
    503     filename_or_obj,
   (...)
    510     **kwargs,
    511 )

File ~/conda/lib/python3.8/site-packages/xarray/backends/zarr.py:800, in ZarrBackendEntrypoint.open_dataset(self, filename_or_obj, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, use_cftime, decode_timedelta, group, mode, synchronizer, consolidated, chunk_store, storage_options, stacklevel)
    799 filename_or_obj = _normalize_path(filename_or_obj)
--> 800 store = ZarrStore.open_group(
    801     filename_or_obj,
    802     group=group,
    803     mode=mode,
    804     synchronizer=synchronizer,
    805     consolidated=consolidated,
    806     consolidate_on_close=False,
    807     chunk_store=chunk_store,
    808     storage_options=storage_options,
    809     stacklevel=stacklevel + 1,
    810 )
    812 store_entrypoint = StoreBackendEntrypoint()

File ~/conda/lib/python3.8/site-packages/xarray/backends/zarr.py:365, in ZarrStore.open_group(cls, store, mode, synchronizer, group, consolidated, consolidate_on_close, chunk_store, storage_options, append_dim, write_region, safe_chunks, stacklevel)
    350         warnings.warn(
    351             "Failed to open Zarr store with consolidated metadata, "
    352             "falling back to try reading non-consolidated metadata. "
   (...)
    363             stacklevel=stacklevel,
    364         )
--> 365         zarr_group = zarr.open_group(store, **open_kwargs)
    366 elif consolidated:
    367     # TODO: an option to pass the metadata_key keyword

File ~/conda/lib/python3.8/site-packages/zarr/hierarchy.py:1182, in open_group(store, mode, cache_attrs, synchronizer, path, chunk_store, storage_options)
   1181             raise ContainsArrayError(path)
-> 1182         raise GroupNotFoundError(path)
   1184 elif mode == 'w':

GroupNotFoundError: group not found at path ''

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
File ~/conda/lib/python3.8/site-packages/xarray/backends/file_manager.py:199, in CachingFileManager._acquire_with_cache_info(self, needs_lock)
    198 try:
--> 199     file = self._cache[self._key]
    200 except KeyError:

File ~/conda/lib/python3.8/site-packages/xarray/backends/lru_cache.py:53, in LRUCache.__getitem__(self, key)
     52 with self._lock:
---> 53     value = self._cache[key]
     54     self._cache.move_to_end(key)

KeyError: [<class 'h5netcdf.core.File'>, ('/tmp/tmpjcs6z9_1/MiLXa92p/894dbe4436b9a4e6ef859338e67d77ae-https_os.zhdk.cloud.switch.ch_envicloud_chelsa_chelsa_v2_global_climatologies_1981-2010_cmi_chelsa_cmi_01_1981-2010_v.2.1.tif',), 'r', (('decode_vlen_strings', True), ('invalid_netcdf', None))]

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
Input In [7], in <cell line: 2>()
      1 run_function = recipe_pruned.to_function()
----> 2 run_function()

File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/executors/python.py:46, in FunctionPipelineExecutor.compile.<locals>.function()
     44         stage.function(m, config=pipeline.config)
     45 else:
---> 46     stage.function(config=pipeline.config)

File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:464, in prepare_target(config)
    461 init_chunks = list(filter(filter_init_chunks, config.iter_chunks()))
    463 for chunk_key in init_chunks:
--> 464     with open_chunk(chunk_key, config=config) as ds:
    465         # ds is already chunked
    466 
    467         # https://github.com/pydata/xarray/blob/5287c7b2546fc8848f539bb5ee66bb8d91d8496f/xarray/core/variable.py#L1069
    468         for v in ds.variables:
    469             if config.target_chunks:

File ~/conda/lib/python3.8/contextlib.py:113, in _GeneratorContextManager.__enter__(self)
    111 del self.args, self.kwds, self.func
    112 try:
--> 113     return next(self.gen)
    114 except StopIteration:
    115     raise RuntimeError("generator didn't yield") from None

File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:339, in open_chunk(chunk_key, config)
    337 # need to open an unknown number of contexts at the same time
    338 with ExitStack() as stack:
--> 339     dsets = [stack.enter_context(open_input(input_key, config=config)) for input_key in inputs]
    341     # subset before chunking; hopefully lazy
    342     subset_dims = [dim_idx for dim_idx in chunk_key if dim_idx.operation == CombineOp.SUBSET]

File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:339, in <listcomp>(.0)
    337 # need to open an unknown number of contexts at the same time
    338 with ExitStack() as stack:
--> 339     dsets = [stack.enter_context(open_input(input_key, config=config)) for input_key in inputs]
    341     # subset before chunking; hopefully lazy
    342     subset_dims = [dim_idx for dim_idx in chunk_key if dim_idx.operation == CombineOp.SUBSET]

File ~/conda/lib/python3.8/contextlib.py:425, in _BaseExitStack.enter_context(self, cm)
    423 _cm_type = type(cm)
    424 _exit = _cm_type.__exit__
--> 425 result = _cm_type.__enter__(cm)
    426 self._push_cm_exit(cm, _exit)
    427 return result

File ~/conda/lib/python3.8/contextlib.py:113, in _GeneratorContextManager.__enter__(self)
    111 del self.args, self.kwds, self.func
    112 try:
--> 113     return next(self.gen)
    114 except StopIteration:
    115     raise RuntimeError("generator didn't yield") from None

File ~/conda/lib/python3.8/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py:303, in open_input(input_key, config)
    301     kw["engine"] = "h5netcdf"
    302 logger.debug(f"about to enter xr.open_dataset context on {f}")
--> 303 with xr.open_dataset(f, **kw) as ds:
    304     logger.debug("successfully opened dataset")
    305     ds = fix_scalar_attr_encoding(ds)

File ~/conda/lib/python3.8/site-packages/xarray/backends/api.py:495, in open_dataset(filename_or_obj, engine, chunks, cache, decode_cf, mask_and_scale, decode_times, decode_timedelta, use_cftime, concat_characters, decode_coords, drop_variables, backend_kwargs, *args, **kwargs)
    483 decoders = _resolve_decoders_kwargs(
    484     decode_cf,
    485     open_backend_dataset_parameters=backend.open_dataset_parameters,
   (...)
    491     decode_coords=decode_coords,
    492 )
    494 overwrite_encoded_chunks = kwargs.pop("overwrite_encoded_chunks", None)
--> 495 backend_ds = backend.open_dataset(
    496     filename_or_obj,
    497     drop_variables=drop_variables,
    498     **decoders,
    499     **kwargs,
    500 )
    501 ds = _dataset_from_backend_dataset(
    502     backend_ds,
    503     filename_or_obj,
   (...)
    510     **kwargs,
    511 )
    512 return ds

File ~/conda/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py:387, in H5netcdfBackendEntrypoint.open_dataset(self, filename_or_obj, mask_and_scale, decode_times, concat_characters, decode_coords, drop_variables, use_cftime, decode_timedelta, format, group, lock, invalid_netcdf, phony_dims, decode_vlen_strings)
    367 def open_dataset(
    368     self,
    369     filename_or_obj,
   (...)
    383     decode_vlen_strings=True,
    384 ):
    386     filename_or_obj = _normalize_path(filename_or_obj)
--> 387     store = H5NetCDFStore.open(
    388         filename_or_obj,
    389         format=format,
    390         group=group,
    391         lock=lock,
    392         invalid_netcdf=invalid_netcdf,
    393         phony_dims=phony_dims,
    394         decode_vlen_strings=decode_vlen_strings,
    395     )
    397     store_entrypoint = StoreBackendEntrypoint()
    399     ds = store_entrypoint.open_dataset(
    400         store,
    401         mask_and_scale=mask_and_scale,
   (...)
    407         decode_timedelta=decode_timedelta,
    408     )

File ~/conda/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py:179, in H5NetCDFStore.open(cls, filename, mode, format, group, lock, autoclose, invalid_netcdf, phony_dims, decode_vlen_strings)
    176         lock = combine_locks([HDF5_LOCK, get_write_lock(filename)])
    178 manager = CachingFileManager(h5netcdf.File, filename, mode=mode, kwargs=kwargs)
--> 179 return cls(manager, group=group, mode=mode, lock=lock, autoclose=autoclose)

File ~/conda/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py:130, in H5NetCDFStore.__init__(self, manager, group, mode, lock, autoclose)
    127 self.format = None
    128 # todo: utilizing find_root_and_group seems a bit clunky
    129 #  making filename available on h5netcdf.Group seems better
--> 130 self._filename = find_root_and_group(self.ds)[0].filename
    131 self.is_remote = is_remote_uri(self._filename)
    132 self.lock = ensure_lock(lock)

File ~/conda/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py:190, in H5NetCDFStore.ds(self)
    188 @property
    189 def ds(self):
--> 190     return self._acquire()

File ~/conda/lib/python3.8/site-packages/xarray/backends/h5netcdf_.py:182, in H5NetCDFStore._acquire(self, needs_lock)
    181 def _acquire(self, needs_lock=True):
--> 182     with self._manager.acquire_context(needs_lock) as root:
    183         ds = _nc4_require_group(
    184             root, self._group, self._mode, create_group=_h5netcdf_create_group
    185         )
    186     return ds

File ~/conda/lib/python3.8/contextlib.py:113, in _GeneratorContextManager.__enter__(self)
    111 del self.args, self.kwds, self.func
    112 try:
--> 113     return next(self.gen)
    114 except StopIteration:
    115     raise RuntimeError("generator didn't yield") from None

File ~/conda/lib/python3.8/site-packages/xarray/backends/file_manager.py:187, in CachingFileManager.acquire_context(self, needs_lock)
    184 @contextlib.contextmanager
    185 def acquire_context(self, needs_lock=True):
    186     """Context manager for acquiring a file."""
--> 187     file, cached = self._acquire_with_cache_info(needs_lock)
    188     try:
    189         yield file

File ~/conda/lib/python3.8/site-packages/xarray/backends/file_manager.py:205, in CachingFileManager._acquire_with_cache_info(self, needs_lock)
    203     kwargs = kwargs.copy()
    204     kwargs["mode"] = self._mode
--> 205 file = self._opener(*self._args, **kwargs)
    206 if self._mode == "w":
    207     # ensure file doesn't get overridden when opened again
    208     self._mode = "a"

File ~/conda/lib/python3.8/site-packages/h5netcdf/core.py:995, in File.__init__(self, path, mode, invalid_netcdf, phony_dims, **kwargs)
    993     else:
    994         self._preexisting_file = os.path.exists(path) and mode != "w"
--> 995         self._h5file = h5py.File(
    996             path, mode, track_order=track_order, **kwargs
    997         )
    998 else:  # file-like object
    999     if version.parse(h5py.__version__) < version.parse("2.9.0"):

File ~/conda/lib/python3.8/site-packages/h5py/_hl/files.py:507, in File.__init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, fs_strategy, fs_persist, fs_threshold, fs_page_size, page_buf_size, min_meta_keep, min_raw_keep, locking, **kwds)
    502     fapl = make_fapl(driver, libver, rdcc_nslots, rdcc_nbytes, rdcc_w0,
    503                      locking, page_buf_size, min_meta_keep, min_raw_keep, **kwds)
    504     fcpl = make_fcpl(track_order=track_order, fs_strategy=fs_strategy,
    505                      fs_persist=fs_persist, fs_threshold=fs_threshold,
    506                      fs_page_size=fs_page_size)
--> 507     fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
    509 if isinstance(libver, tuple):
    510     self._libver = libver

File ~/conda/lib/python3.8/site-packages/h5py/_hl/files.py:220, in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
    218     if swmr and swmr_support:
    219         flags |= h5f.ACC_SWMR_READ
--> 220     fid = h5f.open(name, flags, fapl=fapl)
    221 elif mode == 'r+':
    222     fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

File h5py/_objects.pyx:54, in h5py._objects.with_phil.wrapper()

File h5py/_objects.pyx:55, in h5py._objects.with_phil.wrapper()

File h5py/h5f.pyx:106, in h5py.h5f.open()

OSError: Unable to open file (file signature not found)

@cisaacstern (Member)

This error actually appears to be related to #133 (comment): xarray is trying to open the source file with the h5netcdf backend (our default) because it doesn't know to use rasterio instead. Passing

recipe = XarrayZarrRecipe(
    # etc
    xarray_open_kwargs={"engine": "rasterio"},
)

should move us past this particular error.

Ryan's suggestion in the linked comment to download one of the source files and open it with xarray is a good way to see which (if any) additional xarray_open_kwargs are required aside from engine.

@hscannell (Author)

One problem I see is that when using engine="rasterio", xarray doesn't load the attributes, whereas using rioxarray does.

import xarray as xr
import rioxarray as rxr

url = 'https://os.zhdk.cloud.switch.ch/envicloud/chelsa/chelsa_V2/GLOBAL/climatologies/1981-2010/tasmax/CHELSA_tasmax_01_1981-2010_V.2.1.tif'

da1 = xr.open_dataset(url, engine="rasterio") 
print(f'Opening with xarray... \n {da1} \n')

da2 = rxr.open_rasterio(url)
print(f'Opening with rioxarray... \n {da2}')

Opening with xarray... 
 <xarray.Dataset>
Dimensions:      (band: 1, x: 43200, y: 20880)
Coordinates:
  * band         (band) int64 1
  * x            (x) float64 -180.0 -180.0 -180.0 -180.0 ... 180.0 180.0 180.0
  * y            (y) float64 84.0 83.99 83.98 83.97 ... -89.98 -89.99 -90.0
    spatial_ref  int64 ...
Data variables:
    band_data    (band, y, x) float32 ... 


Opening with rioxarray... 
 <xarray.DataArray (band: 1, y: 20880, x: 43200)>
[902016000 values with dtype=uint16]
Coordinates:
  * band         (band) int64 1
  * x            (x) float64 -180.0 -180.0 -180.0 -180.0 ... 180.0 180.0 180.0
  * y            (y) float64 84.0 83.99 83.98 83.97 ... -89.98 -89.99 -90.0
    spatial_ref  int64 0
Attributes:
    scale_factor:  0.1
    add_offset:    -273.15

@andersy005 (Member) commented May 2, 2022

One problem I see is that when using engine="rasterio", xarray doesn't load the attributes, whereas using rioxarray does.

it appears that xr.open_dataset(..., engine='rasterio') is returning values with dtype=float32 while rioxarray.open_rasterio(...) is returning values with dtype=uint16. Under the hood, xr.open_dataset(..., engine='rasterio') is applying the mask and scale (see https://github.com/corteva/rioxarray/blob/dc8efe36d9e72c93bd20be759b5e464190ba238f/rioxarray/xarray_plugin.py#L45). During this conversion process, xarray is throwing the attributes away.

rioxarray.open_rasterio has mask_and_scale set to False by default. Setting this value to False via backend_kwargs should return the same output as rioxarray.open_rasterio()

In [35]: da1 = xr.open_dataset(url, engine="rasterio", backend_kwargs={'mask_and_scale': False})

In [36]: da1
Out[36]: 
<xarray.Dataset>
Dimensions:      (band: 1, x: 43200, y: 20880)
Coordinates:
  * band         (band) int64 1
  * x            (x) float64 -180.0 -180.0 -180.0 -180.0 ... 180.0 180.0 180.0
  * y            (y) float64 84.0 83.99 83.98 83.97 ... -89.98 -89.99 -90.0
    spatial_ref  int64 ...
Data variables:
    band_data    (band, y, x) uint16 ...

In [37]: da1.band_data.attrs
Out[37]: {'scale_factor': 0.1, 'add_offset': -273.15}

@cisaacstern (Member)

Wow, thanks for this insight, @andersy005!

Hillary, if this looks workable to you, you should be able to achieve what Anderson just demonstrated via

recipe = XarrayZarrRecipe(
    # etc
    xarray_open_kwargs={"engine": "rasterio", "backend_kwargs": {"mask_and_scale": False}},
)

The forthcoming pangeo-forge/pangeo-forge-recipes#245 would allow you to use rxr.open_rasterio as a custom opener, but that's still a little ways off, so we shouldn't wait for it if we can move forward with the kwargs Anderson proposed.

@andersy005 (Member)

During this conversion process, xarray is throwing the attributes away.

I spoke too soon. Xarray keeps the scale_factor and add_offset in .encoding after the conversion (see docs), since these are used to round-trip the IO.

In [42]: da1 = xr.open_dataset(url, engine="rasterio")

In [43]: da1
Out[43]: 
<xarray.Dataset>
Dimensions:      (band: 1, x: 43200, y: 20880)
Coordinates:
  * band         (band) int64 1
  * x            (x) float64 -180.0 -180.0 -180.0 -180.0 ... 180.0 180.0 180.0
  * y            (y) float64 84.0 83.99 83.98 83.97 ... -89.98 -89.99 -90.0
    spatial_ref  int64 ...
Data variables:
    band_data    (band, y, x) float32 ...

In [44]: da1.band_data.attrs
Out[44]: {}

In [45]: da1.band_data.encoding
Out[45]: 
{'dtype': 'uint16',
 'scale_factor': 0.1,
 'add_offset': -273.15,
 'grid_mapping': 'spatial_ref',
 'source': 'https://os.zhdk.cloud.switch.ch/envicloud/chelsa/chelsa_V2/GLOBAL/climatologies/1981-2010/tasmax/CHELSA_tasmax_01_1981-2010_V.2.1.tif',
 'rasterio_dtype': 'uint16'}

@rabernat (Contributor) commented May 2, 2022

{"mask_and_scale": True} (the default) seems like the behavior we want though, right? NetCDF files also commonly have mask and scale. Applying this is an important part of Xarray's encoding / decoding machinery.

For creating multi-file zarrs from many individual files, I think the right behavior is to decode all the variables. Imagine we had two datasets with different scale_factor that we were concatenating into a single dataset. If we did not decode them (applying mask_and_scale), and naively kept just one of the scale_factors, the data would effectively become corrupted. Better to decode all variables and let Zarr handle encoding compression.

PFR automatically deletes all encoding on the inputs here:
https://github.com/pangeo-forge/pangeo-forge-recipes/blob/cc1cb7cb42de0c587e81d2ce1ec9da0dd6e74e15/pangeo_forge_recipes/recipes/xarray_zarr.py#L280-L282
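The concatenation hazard described above can be sketched with toy numbers (the raw values and scale factors below are hypothetical, not actual CHELSA encodings):

```python
import numpy as np

# Two hypothetical inputs storing the same temperatures (298.1 K, 299.0 K)
# as uint16, but with different scale_factor encodings.
raw_a, scale_a = np.array([2981, 2990], dtype="uint16"), 0.1
raw_b, scale_b = np.array([14905, 14950], dtype="uint16"), 0.02

# Correct: decode each input with its own encoding, then concatenate.
decoded = np.concatenate([raw_a * scale_a, raw_b * scale_b])  # all values ~298-299 K

# Wrong: concatenate the raw integers and keep just one scale_factor.
corrupted = np.concatenate([raw_a, raw_b]) * scale_a          # second half is ~1490 K
```

This is why decoding every input, and letting Zarr handle the output encoding, is the safe path for multi-file stores.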

@cisaacstern (Member) commented May 3, 2022

To summarize, starting from Hillary's comment #133 (comment): for the provided url, rxr.open_rasterio returns a DataArray with variable encoding information preserved in its attributes. By contrast, xr.open_dataset returns a Dataset with a single variable, which Anderson showed us contained this same information in the data variable's .encoding.

IIUC based on Ryan's comment, this .encoding will be deleted by pangeo-forge-recipes before writing to Zarr. But perhaps we don't actually mind if that happens?

I propose we try the inexpensive test of just running this recipe (in test form) with xarray_open_kwargs={"engine": "rasterio"} and seeing what happens.

@hscannell, if that makes sense as a next step to you, could you push a commit adding this kwarg to the recipe? If the zarr store produced by the test is missing key attributes, we can address that before merging this PR.

@rabernat (Contributor) commented May 3, 2022

But perhaps we don't actually mind if that happens?

We want that to happen. It's a feature, not a bug.

Mask and scale attributes are not proper "metadata" in the same way that, say, license or provenance are. They can be thought of more as compression codecs.

The default behavior of xr.open_dataset is correct here IMO.
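For reference, the decoding applied under mask_and_scale is just decoded = raw * scale_factor + add_offset (after masking the fill value). A minimal sketch using the CHELSA tasmax encoding shown earlier in this thread, with a hypothetical stored value:

```python
# scale_factor and add_offset are the CHELSA tasmax encodings from the
# comments above; the raw value is hypothetical.
raw = 2981                                 # stored uint16, in 0.1 K units
scale_factor, add_offset = 0.1, -273.15

decoded = raw * scale_factor + add_offset  # degrees Celsius, ~24.95
```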

@hscannell (Author)

Thank you @andersy005 for helping us figure this out! I didn't know that the attributes could be accessed from ds['var'].encoding.

The desired result would be for the data to be scaled and offset using this encoding. @cisaacstern and @rabernat I assume Zarr and therefore PFR knows how to decode this information. The desired result should ultimately do the following, with a sanity check plot for max June surface air temperature.

import xarray as xr

url = 'https://os.zhdk.cloud.switch.ch/envicloud/chelsa/chelsa_V2/GLOBAL/climatologies/1981-2010/tasmax/CHELSA_tasmax_06_1981-2010_V.2.1.tif'
ds = xr.open_dataset(url, engine="rasterio") 

enc = ds.band_data.encoding
decoded_data = ds.band_data[0, 1000:5000, 1000:5000] * enc['scale_factor'] + enc['add_offset']
decoded_data.plot(vmin=0, vmax=20)

[image: sanity-check plot of decoded June maximum surface air temperature]

@pangeo-forge-bot

It looks like there may be a problem with the structure of your PR.

I encountered a FilesChangedOutsideRecipesSubdirError('This PR changes files outside the recipes/ directory.').

@cisaacstern (Member) commented May 3, 2022

Yikes, pre-commit is wreaking havoc here. I think we need to uninstall it for staged-recipes, and then revert. I'll do that now.

@pangeo-forge-bot

It looks like there may be a problem with the structure of your PR.

I encountered a FilesChangedOutsideRecipesSubdirError('This PR changes files outside the recipes/ directory.').

@hscannell (Author)

I'm really confused why this PR is now changing other recipes 🤔

@cisaacstern (Member)

Yes, sorry about that! It's because the pre-commit app was installed in the pangeo-forge org since the last commit, and when I just made a commit via the GitHub web interface, pre-commit ran through all files to update formatting. If you hold off on committing for a few minutes, I can fix that now, and I'll let you know with another comment here when it's good to go.

@cisaacstern (Member)

Ok! I believe we should be back on track here, and apologies again for the confusion. As soon as @pangeo-forge-bot creates a new recipe run for us, I'll trigger a new test of the recipe.

@pangeo-forge-bot

🎉 New recipe runs created for the following recipes at sha 3df2c842d32b548316c93e716823b59a0a805ec3:

@cisaacstern (Member)

/run recipe-test recipe_run_id=144

@pangeo-forge-bot

✨ A test of your recipe chelsa-v2.1 is now running on Pangeo Forge Cloud!

I'll notify you with a comment on this thread when this test is complete. (This could be a little while...)

In the meantime, you can follow the logs for this recipe run at https://pangeo-forge.org/dashboard/recipe-run/144

@pangeo-forge-bot

Pangeo Cloud told me that our test of your recipe chelsa-v2.1 failed. But don't worry, I'm sure we can fix this!

To see what error caused the failure, please review the logs at https://pangeo-forge.org/dashboard/recipe-run/144

If you haven't yet tried pruning and running your recipe locally, I suggest trying that now.

Please report back on the results of your local testing in a new comment below, and a Pangeo Forge maintainer will help you with next steps!

@andersy005
Member

pre-commit.ci autofix

@andersy005
Member

/run chelsa-v2.1

@pangeo-forge
Contributor

pangeo-forge bot commented Oct 25, 2022

The test failed, but I'm sure we can find out why!

Pangeo Forge maintainers are working diligently to provide public logs for contributors.
That feature is not quite ready yet, however, so please reach out on this thread to a
maintainer, and they'll help you diagnose the problem.

@andersy005
Member

/run chelsa-v2.1

@pangeo-forge
Contributor

pangeo-forge bot commented Oct 25, 2022

The test failed, but I'm sure we can find out why!

Pangeo Forge maintainers are working diligently to provide public logs for contributors.
That feature is not quite ready yet, however, so please reach out on this thread to a
maintainer, and they'll help you diagnose the problem.

@andersy005
Member

The failure seems to be related to the same serialization issue reported in #145 (comment). The worker raises an OSError while hashing the recipe:

Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 284, in _execute
    response = task()
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 357, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 597, in do_instruction
    return getattr(self, request_type)(
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 635, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1003, in process_bundle
    input_op_by_transform_id[element.transform_id].process_encoded(
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 227, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/worker/operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
  File "apache_beam/runners/common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
  File "apache_beam/runners/common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
  File "apache_beam/runners/worker/operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 983, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.9/dist-packages/apache_beam/transforms/core.py", line 1877, in <lambda>
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/executors/beam.py", line 14, in _no_arg_stage
    fun(config=config)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/recipes/xarray_zarr.py", line 587, in prepare_target
    for k, v in config.get_execution_context().items():
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/recipes/base.py", line 59, in get_execution_context
    recipe_hash=self.sha256().hex(),
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/recipes/base.py", line 53, in sha256
    return dataclass_sha256(self, ignore_keys=self._hash_exclude_)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/serialization.py", line 73, in dataclass_sha256
    return dict_to_sha256(d)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/serialization.py", line 34, in dict_to_sha256
    b = dumps(
  File "/srv/conda/envs/notebook/lib/python3.9/json/__init__.py", line 234, in dumps
    return cls(
  File "/srv/conda/envs/notebook/lib/python3.9/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/srv/conda/envs/notebook/lib/python3.9/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pangeo_forge_recipes/serialization.py", line 22, in either_encode_or_hash
    return inspect.getsource(obj)
  File "/srv/conda/envs/notebook/lib/python3.9/inspect.py", line 1024, in getsource
    lines, lnum = getsourcelines(object)
  File "/srv/conda/envs/notebook/lib/python3.9/inspect.py", line 1006, in getsourcelines
    lines, lnum = findsource(object)
  File "/srv/conda/envs/notebook/lib/python3.9/inspect.py", line 827, in findsource
    raise OSError('source code not available')
OSError: source code not available [while running 'Start|cache_input|Reshuffle_000|prepare_target|Reshuffle_001|store_chunk|Reshuffle_002|finalize_target|Reshuffle_003/prepare_target-ptransform-56']
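The OSError at the bottom of that traceback can be reproduced outside Beam. A minimal sketch (the `preprocess` name is hypothetical): when hashing a recipe, pangeo-forge-recipes falls back to `inspect.getsource()` for callables it cannot JSON-encode, and that call fails for any function whose source text is not importable, e.g. one defined dynamically.

```python
import inspect

# A function created via exec() has co_filename "<string>", so its source
# text cannot be recovered -- the same condition the serializer hits above.
namespace = {}
exec("def preprocess(ds): return ds", namespace)

try:
    inspect.getsource(namespace["preprocess"])
except OSError as exc:
    print(f"OSError: {exc}")
```

This suggests the fix is to make sure any callables attached to the recipe (e.g. a `process_input` or `process_chunk` function) are defined at module level in the recipe file, where their source can be retrieved.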

@andersy005
Member

/run chelsa-v2.1

@pangeo-forge
Contributor

pangeo-forge bot commented Oct 27, 2022

🎉 The test run of chelsa-v2.1 at 95e1ca9 succeeded!

import xarray as xr

store = "https://ncsa.osn.xsede.org/Pangeo/pangeo-forge/test/pangeo-forge/staged-recipes/recipe-run-1332/chelsa-v2.1.zarr"
ds = xr.open_dataset(store, engine='zarr', chunks={})
ds

@andersy005
Member

@hscannell, the latest run seems to have succeeded. https://pangeo-forge.org/dashboard/recipe-run/1332?feedstock_id=1

Let us know if this is ready, and we'll merge it into main.

@andersy005 andersy005 mentioned this pull request Nov 1, 2022