Revert "Increase jupyter notebook section heading levels"
This reverts commit db480e7.
JessicaS11 committed Dec 21, 2021
1 parent e478588 commit 77a93b1
Showing 8 changed files with 123 additions and 119 deletions.
@@ -4,15 +4,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Subsetting ICESat-2 Data with the NSIDC Subsetter\n",
"## How to Use the NSIDC Subsetter Example Notebook\n",
"## Subsetting ICESat-2 Data with the NSIDC Subsetter\n",
"### How to Use the NSIDC Subsetter Example Notebook\n",
"This notebook illustrates the use of icepyx for subsetting ICESat-2 data ordered through the NSIDC DAAC. We'll show how to find out what subsetting options are available and how to specify the subsetting options for your order.\n",
"\n",
"For more information on using icepyx to find, order, and download data, see our complimentary [ICESat-2_DAAC_DataAccess_Example Notebook](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess_Example.ipynb).\n",
"\n",
"Questions? Be sure to check out the FAQs throughout this notebook, indicated as italic headings.\n",
"\n",
"### Credits\n",
"#### Credits\n",
"* notebook contributors: Zheng Liu, Jessica Scheick, and Amy Steiker\n",
"* some source material: [NSIDC Data Access Notebook](https://github.com/ICESAT-2HackWeek/ICESat2_hackweek_tutorials/tree/main/03_NSIDCDataAccess_Steiker) by Amy Steiker and Bruce Wallin"
]
@@ -21,7 +21,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## _What is SUBSETTING anyway?_\n",
"### _What is SUBSETTING anyway?_\n",
"\n",
"Anyone who's worked with geospatial data has probably encountered subsetting. Typically, we search for data wherever it is stored and download the chunks (aka granules, scenes, passes, swaths, etc.) that contain something we are interested in. Then, we have to extract from each chunk the pieces we actually want to analyze. Those pieces might be geospatial (i.e. an area of interest), temporal (i.e. certain months of a time series), and/or certain variables. This process of extracting the data we are going to use is called subsetting.\n",
"\n",
@@ -32,7 +32,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import packages, including icepyx"
"### Import packages, including icepyx"
]
},
{
@@ -56,7 +56,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a query object and log in to Earthdata\n",
"### Create a query object and log in to Earthdata\n",
"\n",
"For this example, we'll be working with a sea ice product (ATL09) for an area along West Greenland (Disko Bay)."
]
@@ -84,7 +84,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Discover Subsetting Options\n",
"### Discover Subsetting Options\n",
"\n",
"You can see what subsetting options are available for a given product by calling `show_custom_options()`. The options are presented as a series of headings followed by available values in square brackets. Headings are:\n",
"* **Subsetting Options**: whether or not temporal and spatial subsetting are available for the data product\n",
@@ -119,7 +119,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## _Why do I have to provide spatial bounds to icepyx even if I don't use them to subset my data order?_\n",
"### _Why do I have to provide spatial bounds to icepyx even if I don't use them to subset my data order?_\n",
"\n",
"Because they're still needed for the granule level search.\n",
"Spatial inputs are usually required for any data search, on any platform, even if your search parameters cover the entire globe.\n",
@@ -133,7 +133,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## About Data Variables in a query object\n",
"### About Data Variables in a query object\n",
"\n",
"A given ICESat-2 product may have over 200 variable + path combinations.\n",
"icepyx includes a custom `Variables` module that is \"aware\" of the ATLAS sensor and how the ICESat-2 data products are stored.\n",
@@ -146,7 +146,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Determine what variables are available for your data product\n",
"### Determine what variables are available for your data product\n",
"There are multiple ways to get a complete list of available variables.\n",
"To increase readability, some display options (2 and 3, below) show the 200+ variable + path combinations as a dictionary where the keys are variable names and the values are the paths to that variable.\n",
"\n",
@@ -184,7 +184,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## _Why not just download all the data and subset locally? What if I need more variables/granules?_\n",
"### _Why not just download all the data and subset locally? What if I need more variables/granules?_\n",
"\n",
"Taking advantage of the NSIDC subsetter is a great way to reduce your download size and thus your download time and the amount of storage required, especially if you're storing your data locally during analysis. By downloading your data using icepyx, it is easy to go back and get additional data with the same, similar, or different parameters (e.g. you can keep the same spatial and temporal bounds but change the variable list). Related tools (e.g. [`captoolkit`](https://github.com/fspaolo/captoolkit)) will let you easily merge files if you're uncomfortable merging them during read-in for processing."
]
@@ -193,7 +193,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building the default wanted variable list"
"### Building the default wanted variable list"
]
},
{
@@ -219,7 +219,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Applying variable subsetting to your order and download\n",
"### Applying variable subsetting to your order and download\n",
"\n",
"In order to have your wanted variable list included with your order, you must pass it as a keyword argument to the `subsetparams()` attribute or the `order_granules()` or `download_granules()` (which calls `order_granules` under the hood if you have not already placed your order) functions."
]
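The keyword-forwarding behavior described in the cell above (`download_granules` calling `order_granules` under the hood, with subset options passed through) can be sketched with a stand-in class. This is illustrative plain Python, not the icepyx source:

```python
# A hypothetical mimic of the pattern described above, not the real Query class.
class FakeQuery:
    def __init__(self):
        self.order = None

    def subsetparams(self, **kwargs):
        # e.g. kwargs = {"Coverage": [...]} for a wanted-variables list
        return dict(kwargs)

    def order_granules(self, **kwargs):
        self.order = self.subsetparams(**kwargs)

    def download_granules(self, path, **kwargs):
        if self.order is None:          # place the order automatically if needed
            self.order_granules(**kwargs)
        return path, self.order

q = FakeQuery()
path, order = q.download_granules("/tmp/icesat2", Coverage=["latitude"])
print(order)  # {'Coverage': ['latitude']}
```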
@@ -267,7 +267,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## _Why does the subsetter say no matching data was found?_\n",
"### _Why does the subsetter say no matching data was found?_\n",
"Sometimes, granules (\"files\") returned in our initial search end up not containing any data in our specified area of interest.\n",
"This is because the initial search is completed using summary metadata for a granule.\n",
"You've likely encountered this before when viewing available imagery online: your spatial search turns up a bunch of images with only a few border or corner pixels, maybe even in no data regions, in your area of interest.\n",
@@ -278,7 +278,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Check the variable list in your downloaded file\n",
"### Check the variable list in your downloaded file\n",
"\n",
"Compare the available variables associated with the full product relative to those in your downloaded data file."
]
@@ -298,7 +298,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the downloaded data\n",
"#### Check the downloaded data\n",
"Get all `latitude` variables in your downloaded file:"
]
},
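Collecting every `latitude` variable path, as the cell above does for a downloaded HDF5 file, can be illustrated with a nested dict standing in for the file's group tree (the layout below is hypothetical):

```python
# A stand-in for an HDF5 file's group hierarchy -- made-up structure.
tree = {
    "gt1l": {"land_ice_segments": {"latitude": "...", "h_li": "..."}},
    "gt1r": {"land_ice_segments": {"latitude": "..."}},
}

def find(node, name, prefix=""):
    """Recursively collect every path whose final segment equals `name`."""
    hits = []
    for key, val in node.items():
        path = f"{prefix}/{key}" if prefix else key
        if key == name:
            hits.append(path)
        elif isinstance(val, dict):
            hits.extend(find(val, name, path))
    return sorted(hits)

print(find(tree, "latitude"))
# ['gt1l/land_ice_segments/latitude', 'gt1r/land_ice_segments/latitude']
```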
@@ -328,7 +328,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Compare to the variable paths available in the original data"
"#### Compare to the variable paths available in the original data"
]
},
{
@@ -4,12 +4,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Accessing ICESat-2 Data\n",
"## Data Query and Basic Download Example Notebook\n",
"## Accessing ICESat-2 Data\n",
"### Data Query and Basic Download Example Notebook\n",
"This notebook illustrates the use of icepyx for ICESat-2 data access and download from the NASA NSIDC DAAC (NASA National Snow and Ice Data Center Distributed Active Archive Center).\n",
"A complimentary notebook demonstrates in greater detail the [subsetting](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb) options available when ordering data.\n",
"\n",
"### Credits\n",
"#### Credits\n",
"* original notebook by: Jessica Scheick\n",
"* notebook contributors: Amy Steiker and Tyler Sutterley\n",
"* source material: [NSIDC Data Access Notebook](https://github.com/ICESAT-2HackWeek/ICESat2_hackweek_tutorials/tree/master/03_NSIDCDataAccess_Steiker) by Amy Steiker and Bruce Wallin and [2020 Hackweek Data Access Notebook](https://github.com/ICESAT-2HackWeek/2020_ICESat-2_Hackweek_Tutorials/blob/main/06-07.Data_Access/02-Data_Access_rendered.ipynb) by Jessica Scheick and Amy Steiker"
@@ -19,7 +19,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import packages, including icepyx"
"### Import packages, including icepyx"
]
},
{
@@ -38,7 +38,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Quick-Start\n",
"### Quick-Start\n",
"\n",
"The entire process of getting ICESat-2 data (from query to download) can ultimately be accomplished in three minimal lines of code:\n",
"\n",
@@ -57,7 +57,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Key Steps for Programmatic Data Access\n",
"### Key Steps for Programmatic Data Access\n",
"\n",
"There are several key steps for accessing data from the NSIDC API:\n",
"1. Define your parameters (spatial, temporal, dataset, etc.)\n",
@@ -74,7 +74,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create an ICESat-2 data object with the desired search parameters\n",
"### Create an ICESat-2 data object with the desired search parameters\n",
"\n",
"There are three required inputs, depending on how you want to search for data. Two are required in all cases:\n",
"- `short_name` = the data product of interest, known as its \"short name\".\n",
@@ -269,7 +269,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Built in methods allow us to get more information about our data product\n",
"### Built in methods allow us to get more information about our data product\n",
"In addition to viewing the stored object information shown above (e.g. product short name, start and end date and time, version, etc.), we can also request summary information about the data product itself or confirm that we have manually specified the latest version."
]
},
@@ -305,7 +305,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Querying a data product\n",
"### Querying a data product\n",
"In order to search the product collection for available data granules, we need to build our search parameters. This is done automatically behind the scenes when you run `region_a.avail_granules()`, but you can also build and view them by calling `region_a.CMRparams`. These are formatted as a dictionary of key:value pairs according to the [CMR documentation](https://cmr.earthdata.nasa.gov/search/site/docs/search/api.html)."
]
},
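The key:value CMR search parameters mentioned in the cell above might look roughly like the following; the exact keys and values here are assumptions, so check `region_a.CMRparams` and the CMR search API documentation for the real ones:

```python
from urllib.parse import urlencode

# Illustrative CMR-style search parameters (assumed, not icepyx output).
cmr_params = {
    "short_name": "ATL09",
    "version": "004",
    "temporal": "2019-02-22T00:00:00Z,2019-02-28T23:59:59Z",
    "bounding_box": "-55,68,-48,71",
}

# CMR accepts these as a URL query string on its granule search endpoint.
query_string = urlencode(cmr_params)
print(query_string)
```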
@@ -366,7 +366,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Log in to NASA Earthdata\n",
"### Log in to NASA Earthdata\n",
"In order to download any data from NSIDC, we must first authenticate ourselves using a valid Earthdata login. This will create a valid token to interface with the DAAC as well as start an active logged-in session to enable data download. Once you have successfully logged in for a given query instance, the token and session will be passed behind the scenes as needed for you to order and download data. Passwords are entered but not shown or stored in plain text by the system.\n",
"\n",
"There are multiple ways to provide your Earthdata credentials via icepyx.\n",
@@ -388,7 +388,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional Parameters and Subsetting\n",
"### Additional Parameters and Subsetting\n",
"\n",
"Once we have generated our session, we must build the required configuration parameters needed to actually download data. These will tell the system how we want to download the data. As with the CMR search parameters, these will be built automatically when you run `region_a.order_granules()`, but you can also create and view them with `region_a.reqparams`. The default parameters, given below, should work for most users.\n",
"- `page_size` = 2000. This is the number of granules we will request per order.\n",
@@ -397,7 +397,7 @@
"- `agent` = 'NO'\n",
"- `include_meta` = 'Y'\n",
"\n",
"### More details about the configuration parameters\n",
"#### More details about the configuration parameters\n",
"`request_mode` is \"asynchronous\" by default, which allows concurrent requests to be queued and processed without the need for a continuous connection between you and the API endpoint.\n",
"In contrast, using a \"synchronous\" `request_mode` means that the request relies on a direct, continous connection between you and the API endpoint.\n",
"Outputs are directly downloaded, or \"streamed\", to your working directory.\n",
@@ -423,7 +423,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Subsetting\n",
"#### Subsetting\n",
"\n",
"In addition to the required parameters (CMRparams and reqparams) that are submitted with our order, for ICESat-2 data products we can also submit subsetting parameters to NSIDC.\n",
"For a deeper dive into subsetting, please see our [Subsetting Tutorial Notebook](https://github.com/icesat2py/icepyx/blob/main/doc/examples/ICESat-2_DAAC_DataAccess2_Subsetting.ipynb), which covers subsetting in more detail, including how to get a list of subsetting options, how to build your list of subsetting parameters, and how to generate a list of desired variables (most datasets have more than 200 variable fields!), including using pre-built default lists (these lists are still in progress and we welcome contributions!).\n",
@@ -463,7 +463,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Place the order\n",
"#### Place the order\n",
"Then, we can send the order to NSIDC using the order_granules function. Information about the granules ordered and their status will be printed automatically. Status information can also be emailed to the address provided when the `email` kwarg is set to `True`. Additional information on the order, including request URLs, can be viewed by setting the optional keyword input 'verbose' to True."
]
},
@@ -491,7 +491,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Download the order\n",
"### Download the order\n",
"Finally, we can download our order to a specified directory (which needs to have a full path but doesn't have to point to an existing directory) and the download status will be printed as the program runs. Additional information is again available by using the optional boolean keyword `verbose`."
]
},