diff --git a/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb b/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb index ceeb7194a..d5a5f62c3 100644 --- a/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb +++ b/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb @@ -4,15 +4,15 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Subsetting ICESat-2 Data with the NSIDC Subsetter\n", - "## How to Use the NSIDC Subsetter Example Notebook\n", + "## Subsetting ICESat-2 Data with the NSIDC Subsetter\n", + "### How to Use the NSIDC Subsetter Example Notebook\n", "This notebook illustrates the use of icepyx for subsetting ICESat-2 data ordered through the NSIDC DAAC. We'll show how to find out what subsetting options are available and how to specify the subsetting options for your order.\n", "\n", "For more information on using icepyx to find, order, and download data, see our complimentary [ICESat-2_DAAC_DataAccess_Example Notebook](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess_Example.ipynb).\n", "\n", "Questions? Be sure to check out the FAQs throughout this notebook, indicated as italic headings.\n", "\n", - "### Credits\n", + "#### Credits\n", "* notebook contributors: Zheng Liu, Jessica Scheick, and Amy Steiker\n", "* some source material: [NSIDC Data Access Notebook](https://github.com/ICESAT-2HackWeek/ICESat2_hackweek_tutorials/tree/main/03_NSIDCDataAccess_Steiker) by Amy Steiker and Bruce Wallin" ] @@ -21,7 +21,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## _What is SUBSETTING anyway?_\n", + "### _What is SUBSETTING anyway?_\n", "\n", "Anyone who's worked with geospatial data has probably encountered subsetting. Typically, we search for data wherever it is stored and download the chunks (aka granules, scenes, passes, swaths, etc.) that contain something we are interested in. Then, we have to extract from each chunk the pieces we actually want to analyze. Those pieces might be geospatial (i.e. an area of interest), temporal (i.e. certain months of a time series), and/or certain variables. This process of extracting the data we are going to use is called subsetting.\n", "\n", @@ -32,7 +32,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Import packages, including icepyx" + "### Import packages, including icepyx" ] }, { @@ -56,7 +56,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Create a query object and log in to Earthdata\n", + "### Create a query object and log in to Earthdata\n", "\n", "For this example, we'll be working with a sea ice product (ATL09) for an area along West Greenland (Disko Bay)." ] @@ -84,7 +84,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Discover Subsetting Options\n", + "### Discover Subsetting Options\n", "\n", "You can see what subsetting options are available for a given product by calling `show_custom_options()`. The options are presented as a series of headings followed by available values in square brackets. 
Headings are:\n", "* **Subsetting Options**: whether or not temporal and spatial subsetting are available for the data product\n", @@ -119,7 +119,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## _Why do I have to provide spatial bounds to icepyx even if I don't use them to subset my data order?_\n", + "### _Why do I have to provide spatial bounds to icepyx even if I don't use them to subset my data order?_\n", "\n", "Because they're still needed for the granule level search.\n", "Spatial inputs are usually required for any data search, on any platform, even if your search parameters cover the entire globe.\n", @@ -133,7 +133,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## About Data Variables in a query object\n", + "### About Data Variables in a query object\n", "\n", "A given ICESat-2 product may have over 200 variable + path combinations.\n", "icepyx includes a custom `Variables` module that is \"aware\" of the ATLAS sensor and how the ICESat-2 data products are stored.\n", @@ -146,7 +146,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Determine what variables are available for your data product\n", + "### Determine what variables are available for your data product\n", "There are multiple ways to get a complete list of available variables.\n", "To increase readability, some display options (2 and 3, below) show the 200+ variable + path combinations as a dictionary where the keys are variable names and the values are the paths to that variable.\n", "\n", @@ -184,7 +184,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## _Why not just download all the data and subset locally? What if I need more variables/granules?_\n", + "### _Why not just download all the data and subset locally? What if I need more variables/granules?_\n", "\n", "Taking advantage of the NSIDC subsetter is a great way to reduce your download size and thus your download time and the amount of storage required, especially if you're storing your data locally during analysis. By downloading your data using icepyx, it is easy to go back and get additional data with the same, similar, or different parameters (e.g. you can keep the same spatial and temporal bounds but change the variable list). Related tools (e.g. [`captoolkit`](https://github.com/fspaolo/captoolkit)) will let you easily merge files if you're uncomfortable merging them during read-in for processing." ] @@ -193,7 +193,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Building the default wanted variable list" + "### Building the default wanted variable list" ] }, { @@ -219,7 +219,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Applying variable subsetting to your order and download\n", + "### Applying variable subsetting to your order and download\n", "\n", "In order to have your wanted variable list included with your order, you must pass it as a keyword argument to the `subsetparams()` attribute or the `order_granules()` or `download_granules()` (which calls `order_granules` under the hood if you have not already placed your order) functions." 
] @@ -267,7 +267,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## _Why does the subsetter say no matching data was found?_\n", + "### _Why does the subsetter say no matching data was found?_\n", "Sometimes, granules (\"files\") returned in our initial search end up not containing any data in our specified area of interest.\n", "This is because the initial search is completed using summary metadata for a granule.\n", "You've likely encountered this before when viewing available imagery online: your spatial search turns up a bunch of images with only a few border or corner pixels, maybe even in no data regions, in your area of interest.\n", @@ -278,7 +278,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Check the variable list in your downloaded file\n", + "### Check the variable list in your downloaded file\n", "\n", "Compare the available variables associated with the full product relative to those in your downloaded data file." ] @@ -298,7 +298,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Check the downloaded data\n", + "#### Check the downloaded data\n", "Get all `latitude` variables in your downloaded file:" ] }, @@ -328,7 +328,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Compare to the variable paths available in the original data" + "#### Compare to the variable paths available in the original data" ] }, { diff --git a/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess_Example.ipynb b/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess_Example.ipynb index 16eda5f7a..6cd8108cd 100644 --- a/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess_Example.ipynb +++ b/doc/source/getting_started/example_notebooks/ICESat-2_DAAC_DataAccess_Example.ipynb @@ -4,12 +4,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Accessing ICESat-2 Data\n", - "## Data Query and Basic Download Example Notebook\n", + "## Accessing ICESat-2 Data\n", + "### Data Query and Basic Download Example Notebook\n", "This notebook illustrates the use of icepyx for ICESat-2 data access and download from the NASA NSIDC DAAC (NASA National Snow and Ice Data Center Distributed Active Archive Center).\n", "A complimentary notebook demonstrates in greater detail the [subsetting](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb) options available when ordering data.\n", "\n", - "### Credits\n", + "#### Credits\n", "* original notebook by: Jessica Scheick\n", "* notebook contributors: Amy Steiker and Tyler Sutterley\n", "* source material: [NSIDC Data Access Notebook](https://github.com/ICESAT-2HackWeek/ICESat2_hackweek_tutorials/tree/master/03_NSIDCDataAccess_Steiker) by Amy Steiker and Bruce Wallin and [2020 Hackweek Data Access Notebook](https://github.com/ICESAT-2HackWeek/2020_ICESat-2_Hackweek_Tutorials/blob/main/06-07.Data_Access/02-Data_Access_rendered.ipynb) by Jessica Scheick and Amy Steiker" @@ -19,7 +19,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Import packages, including icepyx" + "### Import packages, including icepyx" ] }, { @@ -38,7 +38,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Quick-Start\n", + "### Quick-Start\n", "\n", "The entire process of getting ICESat-2 data (from query to download) can ultimately be accomplished in three minimal lines of code:\n", "\n", @@ -57,7 +57,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Key Steps for Programmatic Data Access\n", + "### 
Key Steps for Programmatic Data Access\n", "\n", "There are several key steps for accessing data from the NSIDC API:\n", "1. Define your parameters (spatial, temporal, dataset, etc.)\n", @@ -74,7 +74,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Create an ICESat-2 data object with the desired search parameters\n", + "### Create an ICESat-2 data object with the desired search parameters\n", "\n", "There are three required inputs, depending on how you want to search for data. Two are required in all cases:\n", "- `short_name` = the data product of interest, known as its \"short name\".\n", @@ -269,7 +269,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Built in methods allow us to get more information about our data product\n", + "### Built in methods allow us to get more information about our data product\n", "In addition to viewing the stored object information shown above (e.g. product short name, start and end date and time, version, etc.), we can also request summary information about the data product itself or confirm that we have manually specified the latest version." ] }, @@ -305,7 +305,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Querying a data product\n", + "### Querying a data product\n", "In order to search the product collection for available data granules, we need to build our search parameters. This is done automatically behind the scenes when you run `region_a.avail_granules()`, but you can also build and view them by calling `region_a.CMRparams`. These are formatted as a dictionary of key:value pairs according to the [CMR documentation](https://cmr.earthdata.nasa.gov/search/site/docs/search/api.html)." ] }, @@ -366,7 +366,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Log in to NASA Earthdata\n", + "### Log in to NASA Earthdata\n", "In order to download any data from NSIDC, we must first authenticate ourselves using a valid Earthdata login. This will create a valid token to interface with the DAAC as well as start an active logged-in session to enable data download. Once you have successfully logged in for a given query instance, the token and session will be passed behind the scenes as needed for you to order and download data. Passwords are entered but not shown or stored in plain text by the system.\n", "\n", "There are multiple ways to provide your Earthdata credentials via icepyx.\n", @@ -388,7 +388,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Additional Parameters and Subsetting\n", + "### Additional Parameters and Subsetting\n", "\n", "Once we have generated our session, we must build the required configuration parameters needed to actually download data. These will tell the system how we want to download the data. As with the CMR search parameters, these will be built automatically when you run `region_a.order_granules()`, but you can also create and view them with `region_a.reqparams`. The default parameters, given below, should work for most users.\n", "- `page_size` = 2000. 
This is the number of granules we will request per order.\n", @@ -397,7 +397,7 @@ "- `agent` = 'NO'\n", "- `include_meta` = 'Y'\n", "\n", - "### More details about the configuration parameters\n", + "#### More details about the configuration parameters\n", "`request_mode` is \"asynchronous\" by default, which allows concurrent requests to be queued and processed without the need for a continuous connection between you and the API endpoint.\n", "In contrast, using a \"synchronous\" `request_mode` means that the request relies on a direct, continous connection between you and the API endpoint.\n", "Outputs are directly downloaded, or \"streamed\", to your working directory.\n", @@ -423,7 +423,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Subsetting\n", + "#### Subsetting\n", "\n", "In addition to the required parameters (CMRparams and reqparams) that are submitted with our order, for ICESat-2 data products we can also submit subsetting parameters to NSIDC.\n", "For a deeper dive into subsetting, please see our [Subsetting Tutorial Notebook](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb), which covers subsetting in more detail, including how to get a list of subsetting options, how to build your list of subsetting parameters, and how to generate a list of desired variables (most datasets have more than 200 variable fields!), including using pre-built default lists (these lists are still in progress and we welcome contributions!).\n", @@ -463,7 +463,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Place the order\n", + "#### Place the order\n", "Then, we can send the order to NSIDC using the order_granules function. Information about the granules ordered and their status will be printed automatically. Status information can also be emailed to the address provided when the `email` kwarg is set to `True`. Additional information on the order, including request URLs, can be viewed by setting the optional keyword input 'verbose' to True." ] }, @@ -491,7 +491,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Download the order\n", + "### Download the order\n", "Finally, we can download our order to a specified directory (which needs to have a full path but doesn't have to point to an existing directory) and the download status will be printed as the program runs. Additional information is again available by using the optional boolean keyword `verbose`." 
] }, diff --git a/doc/source/getting_started/example_notebooks/ICESat-2_DEM_comparison_Colombia_working.ipynb b/doc/source/getting_started/example_notebooks/ICESat-2_DEM_comparison_Colombia_working.ipynb index 2a0784aa3..c64de33b5 100644 --- a/doc/source/getting_started/example_notebooks/ICESat-2_DEM_comparison_Colombia_working.ipynb +++ b/doc/source/getting_started/example_notebooks/ICESat-2_DEM_comparison_Colombia_working.ipynb @@ -4,11 +4,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Comparing ICESat-2 Altimetry Elevations with DEM\n", - "## Example Notebook\n", + "## Comparing ICESat-2 Altimetry Elevations with DEM\n", + "### Example Notebook\n", "This notebook compares elevations from ICESat-2 to those from a DEM.\n", "\n", - "### Credits\n", + "#### Credits\n", "* notebook by: [Jessica Scheick](https://github.com/JessicaS11) and [Shashank Bhushan](https://github.com/ShashankBice)\n" ] }, @@ -16,9 +16,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Setup\n", - "#### The Notebook was run on ICESat2 Hackweek 2019 pangeo image\n", - "#### For full functionality,\n", + "#### Setup\n", + "##### The Notebook was run on ICESat2 Hackweek 2019 pangeo image\n", + "##### For full functionality,\n", "- Please install [icepyx](https://github.com/icesat2py/icepyx), [topolib](https://github.com/ICESAT-2HackWeek/topohack), [contextily](https://github.com/darribas/contextily) using `git clone xxxxx`, `pip install -e .` workflow (see below; **you must restart your kernel after installing the packages**)\n", "- Download [NASA ASP](https://github.com/NeoGeographyToolkit/StereoPipeline) tar ball and unzip, we execute the commands from the notebook, using the path to the untared bin folder for the given commands." ] @@ -58,7 +58,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### ICESat-2 product being explored : [ATL08](https://nsidc.org/data/atl08)\n", + "#### ICESat-2 product being explored : [ATL08](https://nsidc.org/data/atl08)\n", "- Along track heights for canopy (land and vegitation) and terrain\n", "- Terrain heights provided are aggregated over every 100 m along track interval, output contains \"h_te_best_fit: height from best fit algorithm for all photons in the range\", median height and others. Here we use h_te_best_fit.\n", "- See this preliminary introduction and quality assessment [paper](https://www.mdpi.com/2072-4292/11/14/1721) for more detail" @@ -68,7 +68,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Import packages, including icepyx" + "### Import packages, including icepyx" ] }, { @@ -112,7 +112,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Preprocess #1\n", + "### Preprocess #1\n", "- Download using icepyx" ] }, @@ -120,7 +120,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Create an ICESat-2 data object with the desired search parameters\n", + "##### Create an ICESat-2 data object with the desired search parameters\n", "- See the ICESat-2 DAAC Data Access notebook for more details on downloading data from the NSIDC" ] }, @@ -138,7 +138,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Finding and downloading data\n", + "### Finding and downloading data\n", "In order to download any data from NSIDC, we must first authenticate ourselves using a valid Earthdata login (available for free on their website). This will create a valid token to interface with the DAAC as well as start an active logged-in session to enable data download. 
The token is attached to the data object and stored, but the session must be passed to the download function. Then we can order the granules." ] }, @@ -146,7 +146,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Log in to Earthdata" + "#### Log in to Earthdata" ] }, { @@ -185,7 +185,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Place the order" + "#### Place the order" ] }, { @@ -212,7 +212,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Download the order\n", + "#### Download the order\n", "Finally, we can download our order to a specified directory (which needs to have a full path but doesn't have to point to an existing directory) and the download status will be printed as the program runs. Additional information is again available by using the optional boolean keyword 'verbose'." ] }, @@ -239,7 +239,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Clean up the download folder by removing individual order folders:" + "#### Clean up the download folder by removing individual order folders:" ] }, { @@ -265,7 +265,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Preprocess #2\n", + "### Preprocess #2\n", "- Convert data into geopandas dataframe, which allows for doing basic geospatial operations" ] }, @@ -283,7 +283,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Examine content of 1 ATL08 hdf file" + "### Examine content of 1 ATL08 hdf file" ] }, { @@ -399,7 +399,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Convert the list of hdf5 files into more familiar Pandas Dataframe" + "### Convert the list of hdf5 files into more familiar Pandas Dataframe" ] }, { @@ -416,7 +416,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Preprocess #3\n", + "### Preprocess #3\n", "- Visualise data footprints" ] }, @@ -439,7 +439,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## We will use the TANDEM-X Global DEM for our comparison. The resolution of the globally avaialable product is 90 m, with *horizontal* and *vertical* accuracy better than 2 to 3 m.\n", + "### We will use the TANDEM-X Global DEM for our comparison. 
The resolution of the globally avaialable product is 90 m, with *horizontal* and *vertical* accuracy better than 2 to 3 m.\n", "- TANDEM-X DEM for the region was downloaded and preprocessed, filtered using scripts from the [tandemx](https://github.com/dshean/tandemx) repository" ] }, @@ -540,7 +540,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Section 1\n", + "### Section 1\n", "- This contains demonstration of elevation profile along 1 track, which has 6 beams" ] }, @@ -612,7 +612,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Section 2:\n", + "### Section 2:\n", "- Compare ICESat-2 Elevation with that of reference DEM (in this case TANDEM-X)" ] }, @@ -620,7 +620,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Sample elevations from DEM at ATLO8-locations using nearest neighbour algorithm " + "#### Sample elevations from DEM at ATLO8-locations using nearest neighbour algorithm " ] }, { @@ -645,7 +645,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Plot elevation differences (ICESat-2 minus TANDEM-X) as a function of elevation\n" + "#### Plot elevation differences (ICESat-2 minus TANDEM-X) as a function of elevation\n" ] }, { @@ -706,7 +706,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Section 3\n", + "### Section 3\n", "- Application of ICESat-2 as control surface for DEMs coregistration\n", "- Or, to find offsets and align ICESat-2 tracks to a control surface" ] @@ -715,14 +715,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Going fancy, include only if you want to :)" + "### Going fancy, include only if you want to :)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### Application of ICESat-2 as control for DEM co-registration ?\n", + "#### Application of ICESat-2 as control for DEM co-registration ?\n", "- Can use point cloud alignment techniques to align DEMs to points, for now as a starting point we can use the transformation matrix to inform on the horizontal and vertical offset between ICESat-2 tracks and DEMs\n", "- We will be using a flavor of Iterative Closest Point alignment algorithm, implemented in [Ames Stereo Pipeline](https://github.com/NeoGeographyToolkit/StereoPipeline)" ] diff --git a/doc/source/getting_started/example_notebooks/ICESat-2_Data_Read-in_Example.ipynb b/doc/source/getting_started/example_notebooks/ICESat-2_Data_Read-in_Example.ipynb index 219573a80..2a8170bbc 100644 --- a/doc/source/getting_started/example_notebooks/ICESat-2_Data_Read-in_Example.ipynb +++ b/doc/source/getting_started/example_notebooks/ICESat-2_Data_Read-in_Example.ipynb @@ -5,14 +5,14 @@ "id": "552e9ef9", "metadata": {}, "source": [ - "# Reading ICESat-2 Data in for Analysis\n", - "## Example notebook to showcase ICESat-2 data read-in using icepyx\n", + "## Reading ICESat-2 Data in for Analysis\n", + "### Example notebook to showcase ICESat-2 data read-in using icepyx\n", "This notebook illustrates the use of icepyx for reading ICESat-2 data files, loading them into a data object.\n", "Currently the default data object is an Xarray Dataset, with ongoing work to provide support for other data object types.\n", "\n", "For more information on how to order and download ICESat-2 data, see the [icepyx data access tutorial](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess_Example.ipynb).\n", "\n", - "## Motivation\n", + "### Motivation\n", "Most often, when you open a data file, you must specify the underlying data structure and how you'd like 
the information to be read in.\n", "A simple example of this, for instance when opening a csv or similarly delimited file, is letting the software know if the data contains a header row, what the data type is (string, double, float, boolean, etc.) for each column, what the delimeter is, and which columns or rows you'd like to be loaded.\n", "Many ICESat-2 data readers are quite manual in nature, requiring that you accurately type out a list of string paths to the various data variables.\n", @@ -20,13 +20,13 @@ "icepyx simplifies this process by relying on its awareness of ICESat-2 specific data file variable storage structure.\n", "Instead of needing to manually iterate through the beam pairs, you can provide a few options to the `Read` object and icepyx will do the heavy lifting for you (as detailed in this notebook).\n", "\n", - "## Approach\n", + "### Approach\n", "If you're interested in what's happening under the hood: icepyx turns your instructions into something called a catalog, then uses the Intake library and the catalog to actually load the data into memory. Specifically, icepyx creates an [Intake](https://intake.readthedocs.io/en/latest/) data [catalog](https://intake.readthedocs.io/en/latest/catalog.html) for each requested variable and then merges the read-in data from each of the variables to create a single data object.\n", "\n", "Intake catalogs are powerful (and the tool we selected) because they can be saved, shared, modified, and reused to reproducibly read in a set of data files in a consistent way as part of an analysis workflow.\n", "This approach streamlines the transition between data sources (local/downloaded files or, ultimately, cloud/bucket access) and data object types (e.g. [Xarray Dataset](http://xarray.pydata.org/en/stable/generated/xarray.Dataset.html) or [GeoPandas GeoDataFrame](https://geopandas.org/docs/reference/api/geopandas.GeoDataFrame.html)).\n", "\n", - "### Credits\n", + "#### Credits\n", "* original notebook by: Jessica Scheick\n", "* notebook contributors: Wei Ji and Tian\n", "* templates for default ICESat-2 Intake catalogs from: [Wei Ji](https://github.com/icesat2py/icepyx/issues/106) and [Tian](https://github.com/icetianli/ICESat2_xarray).\n" @@ -37,7 +37,7 @@ "id": "0d360de3", "metadata": {}, "source": [ - "## Import packages, including icepyx" + "### Import packages, including icepyx" ] }, { @@ -57,7 +57,7 @@ "source": [ "---------------------------------\n", "\n", - "## Quick Start Guide\n", + "### Quick Start Guide\n", "For those who might be looking into playing with this (but don't want all the details/explanations)" ] }, @@ -110,7 +110,7 @@ "metadata": {}, "source": [ "---------------------------------------\n", - "## Key steps for loading (reading) ICESat-2 data\n", + "### Key steps for loading (reading) ICESat-2 data\n", "\n", "Reading in ICESat-2 data with icepyx happens in a few simple steps:\n", "1. Let icepyx know where to find your data (this might be local files or urls to data in cloud storage)\n", @@ -127,7 +127,7 @@ "id": "9bf6d38c", "metadata": {}, "source": [ - "## Step 0: Get some data if you haven't already\n", + "### Step 0: Get some data if you haven't already\n", "Here are a few lines of code to get you set up with a few data files if you don't already have some on your local system." ] }, @@ -167,7 +167,7 @@ "id": "e8da42c1", "metadata": {}, "source": [ - "## Step 1: Set data source path\n", + "### Step 1: Set data source path\n", "\n", "Provide a full path to the data to be read in (i.e. 
opened).\n", "Currently accepted inputs are:\n", @@ -217,7 +217,7 @@ "id": "92743496", "metadata": {}, "source": [ - "## Step 2: Create a filename pattern for your data files\n", + "### Step 2: Create a filename pattern for your data files\n", "\n", "Files provided by NSIDC typically match the format `\"ATL{product:2}_{datetime:%Y%m%d%H%M%S}_{rgt:4}{cycle:2}{orbitsegment:2}_{version:3}_{revision:2}.h5\"` where the parameters in curly brackets indicate a parameter name (left of the colon) and character length or format (right of the colon).\n", "Some of this information is used during data opening to help correctly read and label the data within the data structure, particularly when multiple files are opened simultaneously.\n", @@ -263,7 +263,7 @@ "id": "4275b04c", "metadata": {}, "source": [ - "## Step 3: Create an icepyx read object\n", + "### Step 3: Create an icepyx read object\n", "\n", "The `Read` object has two required inputs:\n", "- `path` = a string with the full file path or full directory path to your hdf5 (.h5) format files.\n", @@ -301,7 +301,7 @@ "id": "da8d8024", "metadata": {}, "source": [ - "## Step 4: Specify variables to be read in\n", + "### Step 4: Specify variables to be read in\n", "\n", "To load your data into memory or prepare it for analysis, icepyx needs to know which variables you'd like to read in.\n", "If you've used icepyx to download data from NSIDC with variable subsetting (which is the default), then you may already be familiar with the icepyx `Variables` module and how to create and modify lists of variables.\n", @@ -396,7 +396,7 @@ "id": "473de4d7", "metadata": {}, "source": [ - "## Step 5: Loading your data\n", + "### Step 5: Loading your data\n", "\n", "Now that you've set up all the options, you're ready to read your ICESat-2 data into memory!" ] @@ -405,7 +405,9 @@ "cell_type": "code", "execution_count": null, "id": "eaabc976", - "metadata": {}, + "metadata": { + "scrolled": false + }, "outputs": [], "source": [ "ds = reader.load()" @@ -438,7 +440,7 @@ "id": "b1d7de2d", "metadata": {}, "source": [ - "## On to data analysis!\n", + "### On to data analysis!\n", "\n", "From here, you can begin your analysis.\n", "Ultimately, icepyx aims to include an Xarray extension with ICESat-2 aware functions that allow you to do things like easily use only data from strong beams.\n", @@ -471,7 +473,7 @@ "id": "6edfbb25", "metadata": {}, "source": [ - "## More on Intake catalogs and the read object\n", + "### More on Intake catalogs and the read object\n", "\n", "As anyone familiar with ICESat-2 hdf5 files knows, one of the challenges to reading in data is looping through all of the beam pairs for each track.\n", "The icepyx read module takes advantage of icepyx's variables module, which has some awareness of ICESat-2 data and uses that to save the user the trouble of having to loop through each beam pair.\n", @@ -484,7 +486,7 @@ "id": "0f0076f9", "metadata": {}, "source": [ - "### Viewing the template catalog\n", + "#### Viewing the template catalog\n", "\n", "You can access the ICESat-2 catalog template as an attribute of the read object.\n", "\n", @@ -518,7 +520,7 @@ "id": "fef43556", "metadata": {}, "source": [ - "### Use an existing catalog\n", + "#### Use an existing catalog\n", "If you already have a catalog for your data, you can supply that when you create the read object." 
] }, @@ -577,7 +579,7 @@ "id": "d56fc41c", "metadata": {}, "source": [ - "### More customization options\n", + "#### More customization options\n", "\n", "If you'd like to use the icepyx ICESat-2 Catalog template to create your own customized catalog, we recommend that you access the `build_catalog` function directly, which returns an Intake Catalog instance.\n", "\n", @@ -619,7 +621,7 @@ "id": "bab9c949", "metadata": {}, "source": [ - "### Saving your catalog\n", + "#### Saving your catalog\n", "If you create a highly customized ICESat-2 catalog, you can use Intake's `save` to export it as a .yml file.\n", "\n", "Don't forget you can easily use an existing catalog (such as this highly customized one you just made) to read in your data with `reader = ipx.Read(filepath, pattern, catalog)` (so it's as easy as re-creating your reader object with your modified catalog)." diff --git a/doc/source/getting_started/example_notebooks/ICESat-2_Data_Visualization_Example.ipynb b/doc/source/getting_started/example_notebooks/ICESat-2_Data_Visualization_Example.ipynb index ccc9ba278..ba1c1b012 100644 --- a/doc/source/getting_started/example_notebooks/ICESat-2_Data_Visualization_Example.ipynb +++ b/doc/source/getting_started/example_notebooks/ICESat-2_Data_Visualization_Example.ipynb @@ -5,12 +5,12 @@ "id": "1e29ff05", "metadata": {}, "source": [ - "# Visualizing ICESat-2 Data\n", - "## Elevation Visualization Example Notebook\n", + "## Visualizing ICESat-2 Data\n", + "### Elevation Visualization Example Notebook\n", "\n", "This notebook demonstrates interactive ICESat-2 elevation visualization by requesting data from [OpenAltimetry](https://www.openaltimetry.org/) based on metadata provided by [icepyx](https://icepyx.readthedocs.io/en/latest/). We will show how to plot spatial extent and elevation interactively. \n", "\n", - "### Credits\n", + "#### Credits\n", "* Notebook by: [Tian Li](https://github.com/icetianli), [Jessica Scheick](https://github.com/JessicaS11) and \n", "[Wei Ji](https://github.com/weiji14)\n", "* Source material: [READ_ATL06_DEM Notebook](https://github.com/ICESAT-2HackWeek/Assimilation/blob/master/contributors/icetianli/READ_ATL06_DEM.ipynb) by Tian Li and [Friedrich Knuth](https://github.com/friedrichknuth)" @@ -21,7 +21,7 @@ "id": "6333399a", "metadata": {}, "source": [ - "## Import packages" + "### Import packages" ] }, { @@ -39,7 +39,7 @@ "id": "57f2cfd8", "metadata": {}, "source": [ - "## Create an ICESat-2 query object\n", + "### Create an ICESat-2 query object\n", "Set the desired parameters for your data visualization.\n", "\n", "For details on minimum required inputs, please refer to [ICESat-2_DAAC_DataAccess_Example](https://github.com/icesat2py/icepyx/blob/main/examples/ICESat-2_DAAC_DataAccess_Example.ipynb). If you are using a spatial extent input other than a bounding box for your search, it will automatically be converted to a bounding box for the purposes of visualization ONLY (your query object will not be affected)." @@ -126,7 +126,7 @@ "id": "1b178836", "metadata": {}, "source": [ - "## Visualize spatial extent \n", + "### Visualize spatial extent \n", "By calling function `visualize_spatial_extent`, it will plot the spatial extent in red outline overlaid on a basemap, try zoom-in/zoom-out to see where is your interested region and what the geographic features look like in this region." 
] }, @@ -147,9 +147,9 @@ "id": "71ca513d", "metadata": {}, "source": [ - "## Visualize ICESat-2 elevation using OpenAltimetry API\n", + "### Visualize ICESat-2 elevation using OpenAltimetry API\n", "\n", - "### **Note: this function currently only supports products `ATL06, ATL07, ATL08, ATL10, ATL12, ATL13`**\n", + "#### **Note: this function currently only supports products `ATL06, ATL07, ATL08, ATL10, ATL12, ATL13`**\n", "\n", "Now that we have produced an interactive map showing the spatial extent of ICESat-2 data to be requested from NSIDC using icepyx, what if we want to have a quick check on the ICESat-2 elevations we plan to download from NSIDC? [OpenAltimetry API](https://openaltimetry.org/data/swagger-ui/#/) provides a nice way to achieve this. By sending metadata (product, date, bounding box, trackId) of each ICESat-2 file to the API, it can return elevation data almost instantaneously. The major drawback is that requests are limited to 5x5 degree spatial bounding box selection for most of the ICESat-2 L3A products [ATL06, ATL07, ATL08, ATL10, ATL12, ATL13](https://icesat-2.gsfc.nasa.gov/science/data-products). To solve this issue, if your input spatial extent exceeds the 5 degree maximum in either horizontal dimension, your input spatial extent will be split into 5x5 degree lat/lon grids first; icepyx then queries the metadata of ICESat-2 files located in each grid and sends each request to OpenAltimetry. Data sampling rates are 1/50 for ATL06 and 1/20 for other products.\n", "\n", @@ -174,7 +174,7 @@ "id": "9ee72a5c", "metadata": {}, "source": [ - "### Plot elevation for individual RGT\n", + "#### Plot elevation for individual RGT\n", "\n", "The visualization tool also provides the option to view elevation data by latitude for each ground track."
] @@ -194,7 +194,7 @@ "id": "b7082edd", "metadata": {}, "source": [ - "## Move on to data downloading from NSIDC if these are the products of interest\n", + "### Move on to data downloading from NSIDC if these are the products of interest\n", "\n", "For more details on the data ordering and downloading process, see [ICESat-2_DAAC_DataAccess_Example](https://github.com/icesat2py/icepyx/blob/main/examples/ICESat-2_DAAC_DataAccess_Example.ipynb)" ] @@ -223,7 +223,7 @@ "id": "textile-casting", "metadata": {}, "source": [ - "## Alternative Access Options to Visualize ICESat-2 elevation using OpenAltimetry API\n", + "### Alternative Access Options to Visualize ICESat-2 elevation using OpenAltimetry API\n", "\n", "You can also view elevation data by importing the visualization module directly and initializing it with your query object or a list of parameters:\n", " ```\n", diff --git a/doc/source/getting_started/example_notebooks/ICESat-2_cloud_data_access_example.ipynb b/doc/source/getting_started/example_notebooks/ICESat-2_cloud_data_access_example.ipynb index e59b58b94..c4d4d35ff 100644 --- a/doc/source/getting_started/example_notebooks/ICESat-2_cloud_data_access_example.ipynb +++ b/doc/source/getting_started/example_notebooks/ICESat-2_cloud_data_access_example.ipynb @@ -4,17 +4,17 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# ICESat-2 AWS cloud data access with icepyx (BETA ONLY)\n", - "## Utilizing icepyx capabilities to enable cloud data access\n", + "## ICESat-2 AWS cloud data access with icepyx (BETA ONLY)\n", + "### Utilizing icepyx capabilities to enable cloud data access\n", "This notebook illustrates the use of icepyx for access ICESat-2 data currently available through the AWS (Amazon Web Services) us-west2 hub s3 data bucket.\n", "\n", - "## Critical Caveats\n", + "### Critical Caveats\n", "***Please do not contact us saying this does not work until you have read this section in detail***\n", "1. ICESat-2 data is not currently publicly available on the cloud (and will not likely be until at least the end of 2021). A limited subset is currently available in an s3 bucket to developers and beta testers who have been registered with NSIDC.\n", "2. This example and the code it describes are part of ongoing development. Current limitations to using these features are described throughout the example, as appropriate.\n", "3. You **MUST** be working within an AWS instance. Otherwise, you will get a permissions error.\n", "\n", - "### Credits\n", + "#### Credits\n", "* notebook by: Jessica Scheick\n", "* source material: [is2-nsidc-cloud.py](https://gist.github.com/bradlipovsky/80ab6a7aff3d3524b9616a9fc176065e#file-is2-nsidc-cloud-py-L28) by Brad Lipovsky" ] @@ -34,7 +34,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Create an icepyx Query object\n", + "### Create an icepyx Query object\n", "In order to develop and test cloud data access functionality, here we search for an arbitrary granule over Greenland that was previously determined to be available on s3 using [Earthdata Search](https://search.earthdata.nasa.gov/). s3 availability is not yet included in CMR metadata, so it cannot be determined programmatically." 
] }, @@ -64,7 +64,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Construct the granule s3 urls\n", + "### Construct the granule s3 urls\n", "Since cloud data available is not yet included as part of the standard granule metadata, there is no way for us to check whether or not these s3 bucket urls are valid, since they are constructed from other granule metadata. Thus, you may get FileNotFound Errors when trying to use these urls." ] }, @@ -82,7 +82,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Log in to Earthdata and generate an s3 token\n", + "### Log in to Earthdata and generate an s3 token\n", "You can use icepyx's existing login functionality to generate your s3 data access token, which should be good for five hours. We currently do not have this set up to automatically renew, but if you're interested in adding this functionality please get in touch or submit a PR!" ] }, @@ -110,7 +110,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Set up your s3 access using your credentials" + "### Set up your s3 access using your credentials" ] }, { @@ -137,7 +137,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Select an s3 url and access the data\n", + "### Select an s3 url and access the data\n", "Development is underway for data read in capabilities, which will include options for cloud data access. Stay tuned and we'd love for you to join us and contribute!\n", "\n", "**Note: If you get a PermissionDenied Error when trying to read in the data, you may not be sending your request from an AWS hub in us-west2. We're currently working on how to alert users if they will not be able to access ICESat-2 data in the cloud for this reason**" diff --git a/doc/source/getting_started/example_notebooks/Working_with_ICESat-2_Data_Variables.ipynb b/doc/source/getting_started/example_notebooks/Working_with_ICESat-2_Data_Variables.ipynb index ba5b43a34..d25b6ebbd 100644 --- a/doc/source/getting_started/example_notebooks/Working_with_ICESat-2_Data_Variables.ipynb +++ b/doc/source/getting_started/example_notebooks/Working_with_ICESat-2_Data_Variables.ipynb @@ -4,8 +4,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Working with ICESat-2's Nested Variables\n", - "## Get a list of available variables and choose the ones you want to work with\n", + "## Working with ICESat-2's Nested Variables\n", + "### Get a list of available variables and choose the ones you want to work with\n", "\n", "This notebook illustrates the use of icepyx for managing lists of available and wanted ICESat-2 data variables.\n", "The two use cases for variable management within your workflow are:\n", @@ -22,7 +22,7 @@ "\n", "Questions? 
Be sure to check out the FAQs throughout this notebook, indicated as italic headings.\n", "\n", - "### Credits\n", + "#### Credits\n", "* based on the subsetting notebook by: Jessica Scheick and Zheng Liu" ] }, @@ -30,7 +30,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## _Why do ICESat-2 products need a custom variable manager?_\n", + "### _Why do ICESat-2 products need a custom variable manager?_\n", "\n", "It can be confusing and cumbersome to comb through the 200+ variable and path combinations contained in ICESat-2 data products.\n", "The icepyx `Variables` module makes it easier for users to quickly find and extract the specific variables they would like to work with across multiple beams, keywords, and variables and provides reader-friendly formatting to browse variables.\n", @@ -42,7 +42,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### Some technical details about the Variables module\n", + "#### Some technical details about the Variables module\n", "For those eager to push the limits or who want to know more implementation details...\n", "\n", "The only required input to the `Variables` module is `vartype`.\n", @@ -55,7 +55,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Import packages, including icepyx" + "### Import packages, including icepyx" ] }, { @@ -72,7 +72,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Interacting with ICESat-2 Data Variables\n", + "### Interacting with ICESat-2 Data Variables\n", "\n", "Each variables instance (which is actually an associated Variables class object) contains two variable list attributes.\n", "One is the list of possible or available variables (`avail` attribute) and is unmutable, or unchangeable, as it is based on the input product specifications or files.\n", @@ -136,13 +136,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## ICESat-2 data variables\n", + "### ICESat-2 data variables\n", "\n", "ICESat-2 data is natively stored in a nested file format called hdf5.\n", "Much like a directory-file system on a computer, each variable (file) has a unique path through the heirarchy (directories) within the file.\n", "Thus, some variables (e.g. `'latitude'`, `'longitude'`) have multiple paths (one for each of the six beams in most products).\n", "\n", - "## Determine what variables are available\n", + "### Determine what variables are available\n", "`region_a.order_vars.avail` will return a list of all valid path+variable strings." ] }, @@ -192,7 +192,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Building your wanted variable list\n", + "### Building your wanted variable list\n", "\n", "Now that you know which variables and path components are available, you need to build a list of the ones you'd like included.\n", "There are several options for generating your initial list as well as modifying it, giving the user complete control.\n", @@ -246,7 +246,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Modifying your wanted variable list\n", + "### Modifying your wanted variable list\n", "\n", "Generating and modifying your variable request list, which is stored in `region_a.order_vars.wanted`, is controlled by the `append` and `remove` functions that operate on `region_a.order_vars.wanted`. 
The input options to `append` are as follows (the full documentation for this function can be found by executing `help(region_a.order_vars.append)`).\n", "* `defaults` (default False) - include the default variable list for your product (not yet fully implemented for all products; please submit your default variable list for inclusion!)\n", @@ -275,7 +275,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Examples (Overview)\n", + "### Examples (Overview)\n", "Below are a series of examples to show how you can use `append` and `remove` to modify your wanted variable list.\n", "For clarity, `region_a.order_vars.wanted` is cleared at the start of many examples.\n", "However, multiple `append` and `remove` commands can be called in succession to build your wanted variable list (see Examples 3+).\n", @@ -291,7 +291,7 @@ "metadata": {}, "source": [ "------------------\n", - "## Example Track 1 (Land Ice - run with ATL06 dataset)\n", + "### Example Track 1 (Land Ice - run with ATL06 dataset)\n", "\n", "### Example 1.1: choose variables\n", "Add all `latitude` and `longitude` variables across all six beam groups. Note that the additional required variables for time and spacecraft orientation are included by default." @@ -300,7 +300,9 @@ { "cell_type": "code", "execution_count": null, - "metadata": {}, + "metadata": { + "scrolled": false + }, "outputs": [], "source": [ "region_a.order_vars.append(var_list=['latitude','longitude'])\n", @@ -495,7 +497,7 @@ "metadata": {}, "source": [ "------------------\n", - "## Example Track 2 (Atmosphere - run with ATL09 dataset commented out at the start of the notebook)\n", + "### Example Track 2 (Atmosphere - run with ATL09 dataset commented out at the start of the notebook)\n", "\n", "### Example 2.1: choose variables\n", "Add all `latitude` and `longitude` variables" @@ -691,7 +693,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Using your wanted variable list\n", + "### Using your wanted variable list\n", "\n", "Now that you have your wanted variables list, you need to use it within your icepyx object (`Query` or `Read`) will automatically use it. " ] @@ -700,7 +702,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### With a `Query` object\n", + "#### With a `Query` object\n", "In order to have your wanted variable list included with your order, you must pass it as a keyword argument to the `subsetparams()` attribute or the `order_granules()` or `download_granules()` (which calls `order_granules` under the hood if you have not already placed your order) functions." 
] }, @@ -747,7 +749,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "### With a `Read` object\n", + "#### With a `Read` object\n", "Calling the `load()` method on your `Read` object will automatically look for your wanted variable list and use it.\n", "Please see the [read-in example Jupyter Notebook](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_Data_Read-in_Example.ipynb) for a complete example of this usage.\n" ] diff --git a/doc/source/tracking/pypistats/get_pypi_stats.ipynb b/doc/source/tracking/pypistats/get_pypi_stats.ipynb index 1170766a0..3a719e27c 100644 --- a/doc/source/tracking/pypistats/get_pypi_stats.ipynb +++ b/doc/source/tracking/pypistats/get_pypi_stats.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# icepyx PyPI Statistics\n", + "## icepyx PyPI Statistics\n", "Use PyPIStats library to get data on PyPI downloads of icepyx (or any other package)\n", "\n", "See the [pypistats website](https://github.com/hugovk/pypistats) for potential calls, options, and formats (e.g. markdown, rst, html, json, numpy, pandas)\n",