update notebook, fixing typos
betolink committed Jul 17, 2024
1 parent 4d3cd3f commit 83d97cb
Showing 1 changed file with 95 additions and 19 deletions.
114 changes: 95 additions & 19 deletions intermediate/remote_data/remote-data.ipynb
@@ -7,11 +7,11 @@
"source": [
"# Access Patterns to Remote Data with *fsspec*\n",
"\n",
"Accessing remote data with xarray usually means working with cloud-optimized formats like Zarr or COGs, the [CMIP6 tutorial](remote-data.ipynb) shows this pattern in detail. These formats were designed to be efficiently accessed over the internet, however in many cases we might need to access data that is not available in such formats.\n",
"Accessing remote data with xarray usually means working with cloud-optimized formats like Zarr or COGs, the [CMIP6 tutorial](https://tutorial.xarray.dev/intermediate/remote_data/cmip6-cloud.html) shows this pattern in detail. These formats were designed to be efficiently accessed over the internet, however in many cases we might need to access data that is not available in such formats.\n",
"\n",
"This notebook will explore how we can leverage xarray's backends to access remote files. For this we will make use of [`fsspec`](https://github.com/fsspec/filesystem_spec), a powerful Python library that abstracts the internal implementation of remote storage systems into a uniform API that can be used by many file-format specific libraries.\n",
"This notebook will explore how we can leverage xarray's backends to access remote files. For this we will make use of [`fsspec`](https://github.com/fsspec/filesystem_spec), a powerful Python library that abstracts the internal implementation of remote storage systems into a uniform API that can be used by many format specific libraries.\n",
"\n",
"Before starting with remote data, it may be helpful to understand how xarray handles local files and how xarray backends work. The following diagram shows the different components involved in accessing data either locally or remote using the `h5netcdf` backend which uses a format specific library to access HDF5 files.\n",
"Before starting with remote data, it may be helpful to understand how xarray handles local files and how xarray backends work. The following diagram shows the different components involved in accessing data either locally or remote using the `h5netcdf` backend with fsspec which uses a format specific library to access HDF5 files.\n",
"\n",
"![xarray-access(3)](https://gist.github.com/assets/717735/3c3c6801-11ed-43a4-98ea-636b7dd612d8)\n",
"\n",
@@ -59,7 +59,7 @@
"\n",
"\n",
"tracing_output = []\n",
"_match_pattern = \"xarray\"\n",
"_match_pattern = \"xarray/\"\n",
"\n",
"\n",
"def trace_calls(frame, event, arg):\n",
@@ -228,7 +228,9 @@
"source": [
"## Remote Access and File Caching\n",
"\n",
"When we use fsspec to abstract a remote file we are in essence translating byte requests to HTTP range requests over the internet. An HTTP request is a costly I/O operation compared to accessing a local file. Because of this, it's common that libraries that handle over the network data transfers implement a cache to avoid requesting the same data over and over. In the case of fsspec there are different ways to ask the library to handle this **caching and this is one of the most relevant performance considerations** when we work with xarray and remote data.\n",
"When we use fsspec to abstract a remote file we are in essence translating byte requests to HTTP range requests over the internet. An HTTP request is a costly I/O operation compared to accessing a local file. Because of this, it's common that libraries that handle over the network data transfers implement a cache to avoid requesting the same data over and over. In the case of fsspec there are different ways to ask the library to handle this. \n",
"\n",
"> **NOTE**: Caching is one of the most relevant performance considerations when we work with xarray and remote data.\n",
"\n",
"fsspec default cache is called `read-ahead` and as its name suggests it will read ahead of our request a fixed amount of bytes, this is good when we are working with text or tabular data but it's really an anti pattern when we work with scientific data formats. Benchmarks show that any of the caching schemas will perform better than using the default `read-ahead`.\n",
"\n",
@@ -261,11 +263,9 @@
"id": "15",
"metadata": {},
"source": [
"#### block cache + `open()`\n",
"\n",
"If our backend support reading from a buffer we can cache only the parts of the file that we are reading, this is useful but tricky. As we mentioned before fsspec default cache will request an overhead of 5MB ahead of the byte offset we request, and if we are reading small chunks from our file it will be really slow and incur in unnecessary transfers.\n",
"### `open()` remotely + caching strategies\n",
"\n",
"Let's open the same file but using the `h5netcdf` engine and we'll use a block cache strategy that stores predefined block sizes from our remote file.\n"
"If our backend support reading from a buffer we can cache only the parts of the file that we are reading, this is useful but tricky. As we mentioned before fsspec default cache will request an overhead of 5MB ahead of the byte offset we request, and if we are reading small chunks from our file it will be really slow and incur in unnecessary transfers."
]
},
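Before running with the default cache below, note that the read-ahead window itself is configurable through the standard fsspec `block_size` argument; a small sketch (the 1 MB value is an arbitrary example):

```python
import fsspec

fs = fsspec.filesystem("http")
uri = "https://its-live-data.s3-us-west-2.amazonaws.com/test-space/sample-data/sst.mnmean.nc"
# Shrink the read-ahead window from the 5 MB default to 1 MB.
f = fs.open(uri, block_size=1024 * 1024)
```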
{
Expand All @@ -280,20 +280,72 @@
"\n",
"fs = fsspec.filesystem('http')\n",
"\n",
"fsspec_caching = {\n",
"# Note that if we use a context, we'll close the file after the block so operations on xarray may fail if we don't load our data arrays.\n",
"with fs.open(uri) as file:\n",
" ds = xr.open_dataset(file, engine=\"h5netcdf\")\n",
" mean = ds.sst.mean()\n",
" default_cache_info = file.cache\n",
"ds"
]
},
{
"cell_type": "markdown",
"id": "e144b95c-a10c-401b-b142-f9288e9ddf2a",
"metadata": {},
"source": [
"### Using one of the many fsspec caching implementations.\n",
"\n",
"The file we are working with is small and we don't really see the performance implications of using the default caching vs better caching strategies. Now we are going to open the same file using a `block cache` strategy that stores predefined block sizes from our remote file.\n",
"\n",
"> **Note**: For a list of caching implementations see [fsspec API docs](https://filesystem-spec.readthedocs.io/en/latest/api.html#read-buffering)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8d1d597-8245-405f-b6b4-517bdb971b4e",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"uri = \"https://its-live-data.s3-us-west-2.amazonaws.com/test-space/sample-data/sst.mnmean.nc\"\n",
"\n",
"fs = fsspec.filesystem('http')\n",
"\n",
"fsspec_better_caching = {\n",
" \"cache_type\": \"blockcache\", # block cache stores blocks of fixed size and uses eviction using a LRU strategy.\n",
" \"block_size\": 8\n",
" * 1024\n",
" * 1024, # size in bytes per block, adjust depends on the file size but the recommended size is in the MB\n",
" * 1024, # size in bytes per block, adjust depends on the file size but the recommended size range is in the MB\n",
"}\n",
"\n",
"# Note that if we use a context, we'll close the file after the block so operations on xarray may fail if we don't load our data arrays.\n",
"with fs.open(uri, **fsspec_caching) as file:\n",
" ds = xr.open_dataset(file, engine=\"h5netcdf\")\n",
" mean = ds.sst.mean()\n",
"# we are not using a context, we can use ds until we manually close it.\n",
"fo = fs.open(uri, **fsspec_better_caching)\n",
"ds = xr.open_dataset(fo, engine=\"h5netcdf\")\n",
"better_cache_info = fo.cache\n",
"ds"
]
},
{
"cell_type": "markdown",
"id": "acf8df24-0af1-408d-bfb0-3d1ddb4c61d3",
"metadata": {},
"source": [
"#### Comparing performance\n",
"\n",
"Since `v2024.05` we can inspect fsspec file-like objects' cache to measure their I/O performance. We'll notice that using a different implementation (this is not read-ahead) we managed to reduce the total requested bytes and since we can control the page buffer size the total request also decreases. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66eef1e6-cce3-4952-9703-52589e353a53",
"metadata": {},
"outputs": [],
"source": [
"print(default_cache_info, better_cache_info)"
]
},
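If the repr alone is not enough, the counters can be read individually. The attribute names below (`hit_count`, `miss_count`, `total_requested_bytes`) are assumptions based on the instrumentation added around fsspec `v2024.05`; check `dir(fo.cache)` against your installed version:

```python
# Compare the two cache objects captured above; getattr with a default
# keeps this robust if an attribute name differs in your fsspec version.
for name, cache in [("read-ahead", default_cache_info), ("blockcache", better_cache_info)]:
    print(
        f"{name}: hits={getattr(cache, 'hit_count', 'n/a')}, "
        f"misses={getattr(cache, 'miss_count', 'n/a')}, "
        f"total requested bytes={getattr(cache, 'total_requested_bytes', 'n/a')}"
    )
```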
{
"cell_type": "markdown",
"id": "17",
@@ -303,7 +355,10 @@
"\n",
"So far we have only used HTTP to access a remote file, however the commercial cloud has their own implementations with specific features. fsspec allows us to talk to different cloud storage implementations hiding these details from us and the libraries we use. Now we are going to access the same file using the S3 protocol. \n",
"\n",
"> Note: S3, Azure blob, etc all have their names and prefixes but under the hood they still work with the HTTP protocol.\n"
"> Note: S3, Azure blob, etc all have their names and prefixes but under the hood they still work with the HTTP protocol.\n",
"\n",
"\n",
"Let's now open our file with a backend that understands NetCDF and fsspec to abstract remote I/O to cloud storage, but using the default caching.\n"
]
},
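The abstraction also covers filesystem-style operations, so the same API that opened an HTTP URL can list objects in a bucket. A small sketch, assuming `s3fs` is installed (the bucket and prefix come from the sample URL used above):

```python
import fsspec

# The same uniform API, now backed by S3 instead of plain HTTP.
fs = fsspec.filesystem("s3", anon=True)
print(fs.ls("its-live-data/test-space/sample-data"))
```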
{
@@ -319,18 +374,33 @@
"# If we need to pass credentials to our remote storage we can do it here, in this case this is a public bucket\n",
"fs = fsspec.filesystem('s3', anon=True)\n",
"\n",
"fsspec_caching = {\n",
"fsspec_better_caching = {\n",
" \"cache_type\": \"blockcache\", # block cache stores blocks of fixed size and uses eviction using a LRU strategy.\n",
" \"block_size\": 8\n",
" * 1024\n",
" * 1024, # size in bytes per block, adjust depends on the file size but the recommended size is in the MB\n",
"}\n",
"\n",
"# we are not using a context, we can use ds until we manually close it.\n",
"ds = xr.open_dataset(fs.open(uri, **fsspec_caching), engine=\"h5netcdf\")\n",
"fo = fs.open(uri, **fsspec_better_caching)\n",
"ds = xr.open_dataset(fo, engine=\"h5netcdf\")\n",
"ds"
]
},
{
"cell_type": "markdown",
"id": "5a3cc3cf-8d75-4107-b897-c98211ccdd57",
"metadata": {},
"source": [
"#### Remote data access and chunking\n",
"\n",
"One last but important consideration when we access remote data is that we should be aware of the chunking. Internal chunking of data affects how fast xarray engines can get data out of our files. If the chunk size is too small there is a considerable performance penalty, a deep dive into this topic is out of scope for this notebook but here is a list of good resources to better understand it.\n",
"\n",
"* Unidata's [Chunking Data: Why it Matters](https://www.unidata.ucar.edu/blogs/developer/entry/chunking_data_why_it_matters) article.\n",
"* [Chunking in HDF5](https://davis.lbl.gov/Manuals/HDF5-1.8.7/Advanced/Chunking/index.html)\n",
"* [HDF5 chunking tutorial](https://www.youtube.com/watch?v=0HbL-0cqkPo) "
]
},
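As a hedged sketch of how one might inspect the internal chunking of the sample file (assuming `h5py` is installed; h5py accepts fsspec file-like objects):

```python
import fsspec
import h5py

uri = "https://its-live-data.s3-us-west-2.amazonaws.com/test-space/sample-data/sst.mnmean.nc"
with fsspec.open(uri) as f:
    with h5py.File(f, "r") as h5:
        for name, obj in h5.items():
            if isinstance(obj, h5py.Dataset):
                # .chunks is None when a dataset is stored contiguously.
                print(name, "shape:", obj.shape, "chunks:", obj.chunks)
```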
{
"cell_type": "markdown",
"id": "19",
@@ -360,6 +430,11 @@
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
@@ -369,7 +444,8 @@
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
