Commit
* Added PNC backend to xarray. PNC is used for GEOS-Chem, CAMx, CMAQ and other atmospheric data formats that have their own file formats and metadata conventions. It can provide a CF-compliant netCDF-like interface.
* Added whats-new documentation
* Updating pnc_ to remove DunderArrayMixin dependency
* Adding basic tests for pnc. Right now, pnc is simply being tested as a reader for NetCDF3 files.
* Updating for flake8 compliance
* flake does not like unused e
* Updating pnc to PseudoNetCDF
* Remove outer except
* Updating pnc to PseudoNetCDF
* Added open and updated init, based on shoyer review
* Updated indexing and test fix. Indexing supports #1899.
* Added PseudoNetCDF to doc/io.rst
* Changing test subtype
* Changing test subtype, removing pdb
* pnc test case requires netcdf3only. For now, pnc is only supporting the classic data model.
* Adding backend_kwargs default as dict. This ensures **mapping is possible.
* Upgrading tests to CFEncodedDataTest. Some tests are bypassed; PseudoNetCDF string treatment is not currently compatible with xarray. This will be addressed soon.
* Not currently supporting autoclose. I do not fully understand the use case, so I have not implemented these tests.
* Minor updates for flake8
* Explicit skipping, using pytest.mark.skip to skip unsupported tests
* Removing trailing whitespace from pytest skip
* Adding pip support
* Addressing comments
* Bypassing pickle, mask/scale, and object tests. These tests cause errors that do not affect desired backend performance.
* Added uamiv test. PseudoNetCDF reads other formats; this adds a uamiv test to the standard backend tests and skips the mask/scale, object, and boolean tests.
* Adding support for autoclose, to ensure open must be called before accessing variable data
* Adding backend_kwargs to all backends. Most backends currently take no keywords, so an empty dictionary is appropriate.
* Small tweaks to PNC backend
* Remove warning and update whats-new
* Separating install and io pnc doc and updating what's new
* Fixing line length in test
* Tests now use non-netcdf files
* Removing unknown metadata netcdf support
* flake8 cleanup
* Using Python 2 and 3 compat testing
* Disabling mask_and_scale by default; prevents inadvertent double scaling in PNC formats
* Consistent with 3.0.0; updates in 3.0.1 will fix close in uamiv
* Updating readers and line length
* Adding open_mfdataset test, testing by opening the same file twice and stacking it
* Using conda version of PseudoNetCDF
* Removing xfail for netcdf. Mask and scale with PseudoNetCDF and NetCDF4 is not supported, but not prevented.
* Moving pseudonetcdf to v0.15
* Updating what's new
* Fixing open_dataarray CF options: mask_and_scale is None (diagnosed by open_dataset) and decode_cf should be True
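The "backend_kwargs default as dict" item can be illustrated with a minimal sketch. The function name `open_backend` and its internals are hypothetical, not xarray's actual implementation; the point is that normalizing the default to an empty dict makes `**`-unpacking unconditionally valid:

```python
def open_backend(filename, backend_kwargs=None):
    """Hypothetical wrapper showing why backend_kwargs defaults to a dict.

    Normalizing None to {} means **-unpacking always works, so the
    underlying opener never needs a special case for missing keywords.
    """
    kwargs = {} if backend_kwargs is None else dict(backend_kwargs)

    def opener(fname, **kw):  # stands in for a real backend opener
        return {'filename': fname, 'options': kw}

    return opener(filename, **kwargs)


print(open_backend('example.uamiv'))
print(open_backend('example.uamiv', {'format': 'uamiv'}))
```

Copying the incoming mapping (rather than mutating a shared default) also avoids the classic mutable-default-argument pitfall.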
Showing 11 changed files with 440 additions and 17 deletions.
@@ -20,6 +20,7 @@ dependencies:
   - rasterio
   - bottleneck
   - zarr
+  - pseudonetcdf>=3.0.1
   - pip:
     - coveralls
     - pytest-cov
@@ -0,0 +1,101 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import functools

import numpy as np

from .. import Variable
from ..core.pycompat import OrderedDict
from ..core.utils import (FrozenOrderedDict, Frozen)
from ..core import indexing

from .common import AbstractDataStore, DataStorePickleMixin, BackendArray


class PncArrayWrapper(BackendArray):

    def __init__(self, variable_name, datastore):
        self.datastore = datastore
        self.variable_name = variable_name
        array = self.get_array()
        self.shape = array.shape
        self.dtype = np.dtype(array.dtype)

    def get_array(self):
        self.datastore.assert_open()
        return self.datastore.ds.variables[self.variable_name]

    def __getitem__(self, key):
        key, np_inds = indexing.decompose_indexer(
            key, self.shape, indexing.IndexingSupport.OUTER_1VECTOR)

        with self.datastore.ensure_open(autoclose=True):
            array = self.get_array()[key.tuple]  # index backend array

        if len(np_inds.tuple) > 0:
            # index the loaded np.ndarray
            array = indexing.NumpyIndexingAdapter(array)[np_inds]
        return array


class PseudoNetCDFDataStore(AbstractDataStore, DataStorePickleMixin):
    """Store for accessing datasets via PseudoNetCDF
    """
    @classmethod
    def open(cls, filename, format=None, writer=None,
             autoclose=False, **format_kwds):
        from PseudoNetCDF import pncopen
        opener = functools.partial(pncopen, filename, **format_kwds)
        ds = opener()
        mode = format_kwds.get('mode', 'r')
        return cls(ds, mode=mode, writer=writer, opener=opener,
                   autoclose=autoclose)

    def __init__(self, pnc_dataset, mode='r', writer=None, opener=None,
                 autoclose=False):

        if autoclose and opener is None:
            raise ValueError('autoclose requires an opener')

        self._ds = pnc_dataset
        self._autoclose = autoclose
        self._isopen = True
        self._opener = opener
        self._mode = mode
        super(PseudoNetCDFDataStore, self).__init__()

    def open_store_variable(self, name, var):
        with self.ensure_open(autoclose=False):
            data = indexing.LazilyOuterIndexedArray(
                PncArrayWrapper(name, self)
            )
            attrs = OrderedDict((k, getattr(var, k)) for k in var.ncattrs())
            return Variable(var.dimensions, data, attrs)

    def get_variables(self):
        with self.ensure_open(autoclose=False):
            return FrozenOrderedDict((k, self.open_store_variable(k, v))
                                     for k, v in self.ds.variables.items())

    def get_attrs(self):
        with self.ensure_open(autoclose=True):
            return Frozen(dict([(k, getattr(self.ds, k))
                                for k in self.ds.ncattrs()]))

    def get_dimensions(self):
        with self.ensure_open(autoclose=True):
            return Frozen(self.ds.dimensions)

    def get_encoding(self):
        encoding = {}
        encoding['unlimited_dims'] = set(
            [k for k in self.ds.dimensions
             if self.ds.dimensions[k].isunlimited()])
        return encoding

    def close(self):
        if self._isopen:
            self.ds.close()
            self._isopen = False
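`PncArrayWrapper.__getitem__` relies on splitting each indexer into a part the backend can handle (here, `OUTER_1VECTOR`: slices plus at most one integer array) and a residual part applied to the loaded numpy array. A simplified, self-contained sketch of that split — not xarray's actual `decompose_indexer`, just the idea — might look like:

```python
import numpy as np


def decompose_outer_1vector(key):
    """Split an outer indexer into a backend key (slices/ints plus at
    most one integer array) and a residual key applied after loading.

    Illustrative sketch only; xarray's decompose_indexer handles many
    more cases.
    """
    backend_key, numpy_key = [], []
    used_vector = False
    for k in key:
        if isinstance(k, np.ndarray) and used_vector:
            # backend reads the full axis; numpy subsets it afterwards
            backend_key.append(slice(None))
            numpy_key.append(k)
        else:
            if isinstance(k, np.ndarray):
                used_vector = True  # the one array the backend handles
            backend_key.append(k)
            numpy_key.append(slice(None))
    return tuple(backend_key), tuple(numpy_key)
```

For example, indexing a 2-D array with two integer arrays sends the first array to the backend and applies the second to the in-memory result, which reproduces outer-indexing semantics one axis at a time.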
@@ -0,0 +1,31 @@
27, 1001
Henderson, Barron
U.S. EPA
Example file with artificial data
JUST_A_TEST
1, 1
2018, 04, 27, 2018, 04, 27
0
Start_UTC
7
1, 1, 1, 1, 1
-9999, -9999, -9999, -9999, -9999
lat, degrees_north
lon, degrees_east
elev, meters
TEST_ppbv, ppbv
TESTM_ppbv, ppbv
0
8
ULOD_FLAG: -7777
ULOD_VALUE: N/A
LLOD_FLAG: -8888
LLOD_VALUE: N/A, N/A, N/A, N/A, 0.025
OTHER_COMMENTS: www-air.larc.nasa.gov/missions/etc/IcarttDataFormat.htm
REVISION: R0
R0: No comments for this revision.
Start_UTC, lat, lon, elev, TEST_ppbv, TESTM_ppbv
43200, 41.00000, -71.00000, 5, 1.2345, 2.220
46800, 42.00000, -72.00000, 15, 2.3456, -9999
50400, 42.00000, -73.00000, 20, 3.4567, -7777
50400, 42.00000, -74.00000, 25, 4.5678, -8888
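The test file above follows the ICARTT FFI-1001 layout: line 1 gives the header line count and format index, the last header line names the data columns, and comma-separated records follow. A minimal reader sketch under those assumptions (not PseudoNetCDF's actual ICARTT reader, which also interprets flags, units, and missing values):

```python
def parse_icartt_1001(text):
    """Minimal ICARTT FFI-1001 sketch: header count from line 1,
    column names from the last header line, float records after."""
    lines = text.strip().splitlines()
    n_header, ffi = (int(v) for v in lines[0].split(','))
    if ffi != 1001:
        raise ValueError('only FFI 1001 is handled in this sketch')
    columns = [c.strip() for c in lines[n_header - 1].split(',')]
    records = [[float(v) for v in row.split(',')]
               for row in lines[n_header:]]
    return columns, records


# The example file from the diff above, verbatim
ICARTT_SAMPLE = """27, 1001
Henderson, Barron
U.S. EPA
Example file with artificial data
JUST_A_TEST
1, 1
2018, 04, 27, 2018, 04, 27
0
Start_UTC
7
1, 1, 1, 1, 1
-9999, -9999, -9999, -9999, -9999
lat, degrees_north
lon, degrees_east
elev, meters
TEST_ppbv, ppbv
TESTM_ppbv, ppbv
0
8
ULOD_FLAG: -7777
ULOD_VALUE: N/A
LLOD_FLAG: -8888
LLOD_VALUE: N/A, N/A, N/A, N/A, 0.025
OTHER_COMMENTS: www-air.larc.nasa.gov/missions/etc/IcarttDataFormat.htm
REVISION: R0
R0: No comments for this revision.
Start_UTC, lat, lon, elev, TEST_ppbv, TESTM_ppbv
43200, 41.00000, -71.00000, 5, 1.2345, 2.220
46800, 42.00000, -72.00000, 15, 2.3456, -9999
50400, 42.00000, -73.00000, 20, 3.4567, -7777
50400, 42.00000, -74.00000, 25, 4.5678, -8888
"""

columns, records = parse_icartt_1001(ICARTT_SAMPLE)
print(columns)
print(records[0])
```

Sentinel values such as -9999 (missing), -7777 (above upper limit of detection), and -8888 (below lower limit of detection) come through as plain floats here; a real reader would mask them.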
Binary file not shown.