generated from iiasa/python-stub

Commit 96f0080 (1 parent: f3627b1) — 6 changed files with 156 additions and 6 deletions.
Configuring your RIME runs
******************************

process_config.py
=================

This file configures settings and working directories for the project. It acts as a central configuration module that is imported across the other scripts in the project, ensuring consistent configuration.

Key Features
------------

- **Central Configuration**: Stores and manages the settings and directory paths used throughout the project.
- **Easy Import**: Can be imported with ``from process_config import *``, making all configuration values readily available in other scripts.

Dependencies
------------

- ``os``: For interacting with the operating system's file system, used to manage file paths and directories.

Usage
-----

This script is not meant to be run directly. Instead, it should be imported at the beginning of other project scripts so that they share the same configurations, settings, and directory paths.
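A minimal sketch of what a ``process_config.py``-style module might contain; the directory names and settings below are illustrative assumptions, not the project's actual values:

```python
import os

# Project root: normally the directory containing this file; fall back to
# the current working directory when __file__ is unavailable.
PROJECT_ROOT = (
    os.path.dirname(os.path.abspath(__file__))
    if "__file__" in globals()
    else os.getcwd()
)

# Centralised directory paths shared by the other scripts (hypothetical names).
INPUT_DIR = os.path.join(PROJECT_ROOT, "data", "input")
OUTPUT_DIR = os.path.join(PROJECT_ROOT, "data", "output")

# Shared settings (illustrative).
NUM_WORKERS = 4

# Create the directories on import so downstream scripts can rely on them.
for _d in (INPUT_DIR, OUTPUT_DIR):
    os.makedirs(_d, exist_ok=True)
```

A downstream script would then start with ``from process_config import *`` and use ``INPUT_DIR``, ``OUTPUT_DIR``, etc. directly.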
Pre-processing input table data
***********************************

To work with table data, some pre-processing is likely required to achieve the correct formats.

The aim is to go from typically tabular or database data into a compressed 4-D netCDF format that is used in the emulation. For a given climate impacts dataset, this pre-processing only needs to be done once in preparation, and only if working with table data.

The output netCDF has the dimensions:

- ``gmt``: the global mean temperature / warming level at which impacts are calculated (float).
- ``year``: the year to which the ``gmt`` corresponds, if relevant, for example relating to the exposure of a population or land cover in year x.
- ``ssp``: the Shared Socioeconomic Pathway: SSP1, SSP2, SSP3, SSP4 or SSP5 (str).
- ``region``: the spatial region to which the impact relates and might be aggregated, e.g. country, river basin, region (str).

Thus, the input data table should also have these dimensions, normally as columns, plus an additional ``variable`` column.

[example picture of IAMC input file]

The script ``generate_aggregated_inputs.py`` gives an example of this workflow, using a climate impacts dataset in table form (IAMC-wide) and converting it into a netCDF. In this case the data also has ``model`` and ``scenario`` columns, which are not needed in the output dataset.

generate_aggregated_inputs.py
=============================

Key Features
------------

- **Data Aggregation**: Combines data from multiple files or data streams.
- **File Operations**: Uses the ``glob`` and ``os`` modules for file system operations, such as manipulating file paths and directories.
- **Data Processing**: Imports ``xarray`` for working with multi-dimensional arrays, and ``pyam`` for handling integrated assessment modeling data.

Dependencies
------------

- ``alive_progress``: For displaying progress bars in the terminal.
- ``glob``: For file path pattern matching.
- ``os``: For interacting with the operating system's file system.
- ``pyam``: For analysis and visualization of integrated assessment model data.
- ``re``: For regular expression matching and text processing.
- ``xarray``: For working with labeled multi-dimensional arrays.
- ``time``: For timing operations, used in performance measurement.

Usage
-----

The script reads from specified input files or directories, processes the data, and outputs aggregated results. Usage may require customization based on the specific data format and desired output.
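The core reshaping step — from a long table with ``gmt``, ``year``, ``ssp`` and ``region`` columns to a 4-D structure — can be sketched with pandas; the column values below are a toy example, not real impact data:

```python
import pandas as pd

# Toy input table: one row per (gmt, year, ssp, region) combination.
df = pd.DataFrame(
    {
        "gmt": [1.5, 1.5, 2.0, 2.0],
        "year": [2050, 2050, 2050, 2050],
        "ssp": ["SSP1", "SSP2", "SSP1", "SSP2"],
        "region": ["World"] * 4,
        "variable": ["heatwave_days"] * 4,  # illustrative indicator name
        "value": [10.0, 12.0, 18.0, 21.0],
    }
)

# Drop columns not needed in the output (e.g. model/scenario in the real
# data), then index by the four output dimensions.
indexed = df.set_index(["gmt", "year", "ssp", "region"])["value"]

# With xarray installed, `indexed.to_xarray().to_netcdf(...)` would then
# write the compressed 4-D netCDF described above.
```

This is only a sketch of the reshape; the actual script also handles multiple files, multiple variables, and IAMC-wide to long-format conversion via ``pyam``.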
Example script that takes an input table of emissions scenarios with global temperature timeseries, and outputs maps of climate impacts through time as netCDF. The output netCDF can be specified either for one scenario and multiple climate impacts, or for multiple scenarios and one indicator.

process_maps.py
===============

This example script takes an input table of emissions scenarios along with global temperature time series and generates maps of climate impacts over time as NetCDF files. It exemplifies the application of the RIME framework to spatially resolved climate impact data, facilitating the visualization and analysis of geographic patterns in climate impacts.

Key Features
------------

- **Geographical Data Processing**: Handles operations on map data, such as transforming, analyzing, and visualizing geographical information.
- **Data Handling**: Deals with large datasets, using ``dask`` for parallel computing and efficient data processing.

Dependencies
------------

- ``dask``: For parallel computing in Python, needed for handling large datasets and efficient computation.

Overview
--------

The script's flexibility allows outputs to be specified either for a single scenario across multiple climate impacts, or for multiple scenarios focused on a single indicator. This adaptability makes it a valuable tool for in-depth climate impact studies that require spatial analysis and visualization.

Usage
-----

By processing emissions scenarios and associated temperature projections, ``process_maps.py`` produces NetCDF files that map climate impacts over time. These outputs are instrumental in visualizing the geographic distribution and evolution of climate impacts, aiding in the interpretation and communication of complex climate data.
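The underlying idea — looking up pre-computed impact maps, indexed by warming level, along a scenario's global-mean-temperature trajectory — can be illustrated with plain NumPy. This is a conceptual sketch, not the RIME implementation; the grid sizes and values are toy data:

```python
import numpy as np

# Impact maps at discrete warming levels: shape (gmt, lat, lon).
# Toy fields: the impact value in every cell simply equals the warming level.
gmt_levels = np.array([1.0, 1.5, 2.0, 3.0])
impact_maps = gmt_levels[:, None, None] * np.ones((4, 3, 4))

# A scenario's global-mean-temperature trajectory by year (illustrative).
years = np.array([2030, 2050, 2100])
gmt_traj = np.array([1.2, 1.8, 2.5])


def maps_through_time(traj):
    """Linearly interpolate each grid cell along the gmt dimension."""
    out = np.empty((len(traj),) + impact_maps.shape[1:])
    for i, t in enumerate(traj):
        out[i] = np.apply_along_axis(
            lambda col: np.interp(t, gmt_levels, col), 0, impact_maps
        )
    return out


result = maps_through_time(gmt_traj)  # shape (year, lat, lon)
```

In the real workflow the result would be wrapped in an ``xarray`` object, chunked with ``dask`` for large grids, and written out with ``to_netcdf``.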
Example script that takes an input table of emissions scenarios with global temperature timeseries, and outputs tables of climate impacts data in IAMC format. This can be done for multiple scenarios and indicators at a time.

process_tabledata.py
====================

This script is intended for processing table data, potentially involving large datasets given the use of Dask for parallel computing. It includes functionality for reading, processing, and aggregating or summarizing table data.

Key Features
------------

- **Table Data Processing**: Focuses on operations on table data, including reading, manipulation, and analysis.
- **Parallel Computing**: Uses Dask for efficient handling of large datasets, so the script is optimized for performance.

Dependencies
------------

- ``dask``: For parallel computing, particularly ``dask.dataframe``, which is similar to pandas but with parallel computing capabilities.
- ``dask.diagnostics``: For performance diagnostics and progress bars, providing tools for profiling and resource management during computation.
- ``dask.distributed``: For distributed computing, allowing the script to scale across multiple nodes if necessary.

Usage
-----

The script is structured to be executed directly via a ``__main__`` block. It imports configurations from ``process_config.py`` and functions from ``rime_functions.py``, so it integrates closely with the other components of the project. Users may need to customize the script to fit their specific data formats and processing requirements.
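The multiple-scenarios-times-multiple-indicators fan-out can be sketched with the standard library as a stand-in for Dask; in the real script ``dask.dataframe`` and ``dask.distributed`` play this role, and the scenario and indicator names below are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

scenarios = ["scen_a", "scen_b", "scen_c"]       # hypothetical names
indicators = ["heatwave_days", "crop_yield"]     # hypothetical names


def process(scenario, indicator):
    # Stand-in for the per-(scenario, indicator) impact calculation that
    # would produce one IAMC-format table.
    return {"scenario": scenario, "indicator": indicator, "rows": 100}


# Process every (scenario, indicator) combination in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [
        pool.submit(process, s, i) for s in scenarios for i in indicators
    ]
    results = [f.result() for f in futures]

print(len(results))  # → 6 tables, one per combination
```

The resulting per-combination tables would then be concatenated into a single IAMC-format output table.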