diff --git a/.gitignore b/.gitignore index c28dee5..5580ed4 100644 --- a/.gitignore +++ b/.gitignore @@ -2,7 +2,7 @@ ckp/ rollout/ rollouts/ -wandb +wandb/ *.out datasets baselines diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index fc49dbe..db02477 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -19,7 +19,7 @@ repos: - id: check-yaml - id: requirements-txt-fixer - repo: https://github.com/astral-sh/ruff-pre-commit - rev: 'v0.1.8' + rev: 'v0.2.2' hooks: - id: ruff args: [ --fix ] diff --git a/README.md b/README.md index a25796e..5249548 100644 --- a/README.md +++ b/README.md @@ -74,7 +74,7 @@ pip install --upgrade jax[cuda12_pip]==0.4.20 -f https://storage.googleapis.com/ ### MacOS Currently, only the CPU installation works. You will need to change a few small things to get it going: - Clone installation: in `pyproject.toml` change the torch version from `2.1.0+cpu` to `2.1.0`. Then, remove the `poetry.lock` file and run `poetry install --only main`. -- Configs: You will need to set `f64: False` and `num_workers: 0` in the `configs/` files. +- Configs: You will need to set `dtype=float32` and `train.num_workers=0`. Although the current [`jax-metal==0.0.5` library](https://pypi.org/project/jax-metal/) supports jax in general, there seems to be a missing feature used by `jax-md` related to padding -> see [this issue](https://github.com/google/jax/issues/16366#issuecomment-1591085071). @@ -83,39 +83,39 @@ Although the current [`jax-metal==0.0.5` library](https://pypi.org/project/jax-m A general tutorial is provided in the example notebook "Training GNS on the 2D Taylor Green Vortex" under `./notebooks/tutorial.ipynb` on the [LagrangeBench repository](https://github.com/tumaer/lagrangebench). The notebook covers the basics of LagrangeBench, such as loading a dataset, setting up a case, training a model from scratch and evaluating its performance. ### Running in a local clone (`main.py`) -Alternatively, experiments can also be set up with `main.py`, based on extensive YAML config files and cli arguments (check [`configs/`](configs/)). By default, the arguments have priority as: 1) passed cli arguments, 2) YAML config and 3) [`defaults.py`](lagrangebench/defaults.py) (`lagrangebench` defaults). +Alternatively, experiments can also be set up with `main.py`, based on extensive YAML config files and cli arguments (check [`configs/`](configs/)). By default, the arguments have priority as 1) passed cli arguments, 2) YAML config and 3) [`defaults.py`](lagrangebench/defaults.py) (`lagrangebench` defaults). -When loading a saved model with `--model_dir` the config from the checkpoint is automatically loaded and training is restarted. For more details check the [`experiments/`](experiments/) directory and the [`run.py`](experiments/run.py) file. +When loading a saved model with `load_ckp` the config from the checkpoint is automatically loaded and training is restarted. For more details check the [`runner.py`](lagrangebench/runner.py) file. **Train** For example, to start a _GNS_ run from scratch on the RPF 2D dataset use ``` -python main.py --config configs/rpf_2d/gns.yaml +python main.py config=configs/rpf_2d/gns.yaml ``` Some model presets can be found in `./configs/`. -If `--mode=all`, then training (`--mode=train`) and subsequent inference (`--mode=infer`) on the test split will be run in one go. +If `mode=all` is provided, then training (`mode=train`) and subsequent inference (`mode=infer`) on the test split will be run in one go. 
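The override order described above (CLI arguments over YAML config over `lagrangebench` defaults) maps naturally onto OmegaConf, which the new config system in this diff is built on. The following is a minimal sketch of that merge order, assuming `set_defaults` from `lagrangebench/defaults.py`; it deliberately omits the resolution of the `extends` chain that the actual `main.py`/`runner.py` entry point handles.

```python
# Sketch of the priority order: 3) library defaults < 2) YAML config < 1) CLI.
# Illustrative only, not the exact runner implementation; `extends` resolution
# is skipped here.
from omegaconf import OmegaConf

from lagrangebench.defaults import set_defaults

cfg = set_defaults()                                                    # 3) defaults.py
cfg = OmegaConf.merge(cfg, OmegaConf.load("configs/rpf_2d/gns.yaml"))   # 2) YAML config
cfg = OmegaConf.merge(cfg, OmegaConf.from_cli())                        # 1) CLI overrides win
print(cfg.model.name, cfg.train.optimizer.lr_start)
```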
**Restart training** -To restart training from the last checkpoint in `--model_dir` use +To restart training from the last checkpoint in `load_ckp` use ``` -python main.py --model_dir ckp/gns_rpf2d_yyyymmdd-hhmmss +python main.py load_ckp=ckp/gns_rpf2d_yyyymmdd-hhmmss ``` **Inference** -To evaluate a trained model from `--model_dir` on the test split (`--test`) use +To evaluate a trained model from `load_ckp` on the test split (`test=True`) use ``` -python main.py --model_dir ckp/gns_rpf2d_yyyymmdd-hhmmss/best --rollout_dir rollout/gns_rpf2d_yyyymmdd-hhmmss/best --mode infer --test +python main.py load_ckp=ckp/gns_rpf2d_yyyymmdd-hhmmss/best rollout_dir=rollout/gns_rpf2d_yyyymmdd-hhmmss/best mode=infer test=True ``` -If the default `--out_type_infer=pkl` is active, then the generated trajectories and a `metricsYYYY_MM_DD_HH_MM_SS.pkl` file will be written to the `--rollout_dir`. The metrics file contains all `--metrics_infer` properties for each generated rollout. +If the default `eval.infer.out_type=pkl` is active, then the generated trajectories and a `metricsYYYY_MM_DD_HH_MM_SS.pkl` file will be written to `eval.rollout_dir`. The metrics file contains all `eval.infer.metrics` properties for each generated rollout. ## Datasets -The datasets are hosted on Zenodo under the DOI: [10.5281/zenodo.10021925](https://zenodo.org/doi/10.5281/zenodo.10021925). When creating a new dataset instance, the data is automatically downloaded. Alternatively, to manually download them use the `download_data.sh` shell script, either with a specific dataset name or "all". Namely +The datasets are hosted on Zenodo under the DOI: [10.5281/zenodo.10021925](https://zenodo.org/doi/10.5281/zenodo.10021925). If a dataset is not found in `dataset_path`, the data is automatically downloaded. Alternatively, to manually download the datasets use the `download_data.sh` shell script, either with a specific dataset name or "all". 
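Because a dataset class downloads its files on first use, a short Python session can stand in for the shell script; below is a minimal sketch, assuming the `dataset_path` keyword matches the one of `H5Dataset`. The explicit per-dataset `download_data.sh` invocations follow right after.

```python
# Sketch: instantiating a dataset triggers the Zenodo download if its files
# are not found under dataset_path (keyword assumed to match H5Dataset).
import lagrangebench

path = "datasets/2D_RPF_3200_20kevery100"
data_train = lagrangebench.RPF2D("train", dataset_path=path)
data_test = lagrangebench.RPF2D("test", dataset_path=path, extra_seq_length=20)
print(data_train.metadata["dim"], data_train.num_samples)
```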
Namely - __Taylor Green Vortex 2D__: `bash download_data.sh tgv_2d datasets/` - __Reverse Poiseuille Flow 2D__: `bash download_data.sh rpf_2d datasets/` - __Lid Driven Cavity 2D__: `bash download_data.sh ldc_2d datasets/` @@ -129,7 +129,7 @@ The datasets are hosted on Zenodo under the DOI: [10.5281/zenodo.10021925](https ### Notebooks We provide three notebooks that show LagrangeBench functionalities, namely: - [`tutorial.ipynb`](notebooks/tutorial.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/tutorial.ipynb), with a general overview of LagrangeBench library, with training and evaluation of a simple GNS model, -- [`datasets.ipynb`](notebooks/datasets.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/datasets.ipynb), with more details and visualizations on the datasets, and +- [`datasets.ipynb`](notebooks/datasets.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/datasets.ipynb), with more details and visualizations of the datasets, and - [`gns_data.ipynb`](notebooks/gns_data.ipynb) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tumaer/lagrangebench/blob/main/notebooks/gns_data.ipynb), showing how to train models within LagrangeBench on the datasets from the paper [Learning to Simulate Complex Physics with Graph Networks](https://arxiv.org/abs/2002.09405). ## Directory structure @@ -144,7 +144,8 @@ We provide three notebooks that show LagrangeBench functionalities, namely: ┃ ┗ 📜utils.py ┣ 📂evaluate # Evaluation and rollout generation tools ┃ ┣ 📜metrics.py - ┃ ┗ 📜rollout.py + ┃ ┣ 📜rollout.py + ┃ ┗ 📜utils.py ┣ 📂models # Baseline models ┃ ┣ 📜base.py # BaseModel class ┃ ┣ 📜egnn.py @@ -157,6 +158,7 @@ We provide three notebooks that show LagrangeBench functionalities, namely: ┃ ┣ 📜strats.py # Training tricks ┃ ┗ 📜trainer.py # Trainer method ┣ 📜defaults.py # Default values + ┣ 📜runner.py # Runner wrapping training and inference ┗ 📜utils.py ``` @@ -167,9 +169,9 @@ Welcome! We highly appreciate [Github issues](https://github.com/tumaer/lagrange You can also chat with us on [**Discord**](https://discord.gg/Ds8jRZ78hU). ### Contributing Guideline -If you want to contribute to this repository, you will need the dev depencencies, i.e. +If you want to contribute to this repository, you will need the dev dependencies, i.e. install the environment with `poetry install` without the ` --only main` flag. -Then, we also recommend you to install the pre-commit hooks +Then, we also recommend you install the pre-commit hooks if you don't want to manually run `pre-commit run` before each commit. To sum up: ```bash @@ -181,6 +183,10 @@ source $PATH_TO_LAGRANGEBENCH_VENV/bin/activate # install pre-commit hooks defined in .pre-commit-config.yaml # ruff is configured in pyproject.toml pre-commit install + +# if you want to bump the version in both pyproject.toml and __init__.py, do +poetry self add poetry-bumpversion +poetry version patch # or minor/major ``` After you have run `git add ` and try to `git commit`, the pre-commit hook will @@ -195,10 +201,11 @@ pytest ### Clone vs Library LagrangeBench can be installed by cloning the repository or as a standalone library. 
This offers more flexibility, but it also comes with its disadvantages: the necessity to implement some things twice. If you change any of the following things, make sure to update its counterpart as well: -- General setup in `experiments/` and `notebooks/tutorial.ipynb` +- General setup in `lagrangebench/runner.py` and `notebooks/tutorial.ipynb` - Configs in `configs/` and `lagrangebench/defaults.py` - Zenodo URLs in `download_data.sh` and `lagrangebench/data/data.py` - Dependencies in `pyproject.toml`, `requirements_cuda.txt`, and `docs/requirements.txt` +- Library version in `pyproject.toml` and `lagrangebench/__init__.py` ## Citation @@ -229,6 +236,7 @@ The associated datasets can be cited as: ### Publications -The following further publcations are based on the LagrangeBench codebase: +The following further publications are based on the LagrangeBench codebase: 1. [Learning Lagrangian Fluid Mechanics with E(3)-Equivariant Graph Neural Networks (GSI 2023)](https://arxiv.org/abs/2305.15603), A. P. Toshev, G. Galletti, J. Brandstetter, S. Adami, N. A. Adams +2. [Neural SPH: Improved Neural Modeling of Lagrangian Fluid Dynamics](https://arxiv.org/abs/2402.06275), A. P. Toshev, J. A. Erbesdobler, N. A. Adams, J. Brandstetter diff --git a/configs/WaterDrop_2d/base.yaml b/configs/WaterDrop_2d/base.yaml deleted file mode 100644 index be27172..0000000 --- a/configs/WaterDrop_2d/base.yaml +++ /dev/null @@ -1,6 +0,0 @@ -extends: defaults.yaml - -data_dir: /tmp/datasets/WaterDrop -wandb_project: waterdrop_2d - -neighbor_list_backend: matscipy diff --git a/configs/WaterDrop_2d/gns.yaml b/configs/WaterDrop_2d/gns.yaml index b89287a..3a745a7 100644 --- a/configs/WaterDrop_2d/gns.yaml +++ b/configs/WaterDrop_2d/gns.yaml @@ -1,6 +1,19 @@ -extends: WaterDrop_2d/base.yaml +extends: LAGRANGEBENCH_DEFAULTS -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +main: + dataset_path: /tmp/datasets/WaterDrop + +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + optimizer: + lr_start: 5.e-4 + +logging: + wandb_project: waterdrop_2d + +neighbors: + backend: matscipy diff --git a/configs/dam_2d/base.yaml b/configs/dam_2d/base.yaml index be1d3bd..3639e7c 100644 --- a/configs/dam_2d/base.yaml +++ b/configs/dam_2d/base.yaml @@ -1,7 +1,9 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/2D_DAM_5740_20kevery100 -wandb_project: dam_2d +dataset_path: datasets/2D_DAM_5740_20kevery100 -neighbor_list_multiplier: 2.0 -noise_std: 0.001 +logging: + wandb_project: dam_2d + +neighbors: + multiplier: 2.0 diff --git a/configs/dam_2d/gns.yaml b/configs/dam_2d/gns.yaml index 1b5891e..4cabc0e 100644 --- a/configs/dam_2d/gns.yaml +++ b/configs/dam_2d/gns.yaml @@ -1,6 +1,11 @@ -extends: dam_2d/base.yaml +extends: configs/dam_2d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + noise_std: 0.001 + optimizer: + lr_start: 5.e-4 diff --git a/configs/dam_2d/segnn.yaml b/configs/dam_2d/segnn.yaml index e7facf7..c50ce85 100644 --- a/configs/dam_2d/segnn.yaml +++ b/configs/dam_2d/segnn.yaml @@ -1,8 +1,12 @@ -extends: dam_2d/base.yaml +extends: configs/dam_2d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 5.e-4 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + noise_std: 0.001 + optimizer: + lr_start: 5.e-4 diff --git a/configs/defaults.yaml b/configs/defaults.yaml deleted file mode 100644 index 0771f6a..0000000 
--- a/configs/defaults.yaml +++ /dev/null @@ -1,118 +0,0 @@ -# Fallback parameters for the config file. These are overwritten by the config file. -extends: -# Model settings -# Model architecture name. gns, segnn, egnn -model: -# Length of the position input sequence -input_seq_length: 6 -# Number of message passing steps -num_mp_steps: 10 -# Number of MLP layers -num_mlp_layers: 2 -# Hidden dimension -latent_dim: 128 -# Load checkpointed model from this directory -model_dir: -# SEGNN only parameters -# Steerable attributes level -lmax_attributes: 1 -# Level of the hidden layer -lmax_hidden: 1 -# SEGNN normalization. instance, batch, none -segnn_norm: none -# SEGNN velocity aggregation. avg or last -velocity_aggregate: avg - -# Optimization settings -# Max steps -step_max: 500000 -# Batch size -batch_size: 1 -# Starting learning rate -lr_start: 1.e-4 -# Final learning rate after decay -lr_final: 1.e-6 -# Rate of learning rate decay -lr_decay_rate: 0.1 -# Number of steps for the learning rate to decay -lr_decay_steps: 1.e+5 -# Standard deviation for the additive noise -noise_std: 0.0003 -# Whether to use magnitudes or not -magnitude_features: False -# Whether to normalize inputs and outputs with the same value in x, y, ans z. -isotropic_norm: False -# Parameters related to the push-forward trick -pushforward: - # At which training step to introduce next unroll stage - steps: [-1, 200000, 300000, 400000] - # For how many steps to unroll - unrolls: [0, 1, 2, 3] - # Which probability ratio to keep between the unrolls - probs: [18, 2, 1, 1] - -# Loss settings -# Loss weight for position, acceleration, and velocity components -loss_weight: - acc: 1.0 - -# Run settings -# train, infer, all -mode: all -# Dataset directory -data_dir: -# Number of rollout steps. If "-1", then defaults to sequence_length - input_seq_len. -# n_rollout_steps must be <= ground truth len. For extrapolation use n_extrap_steps -n_rollout_steps: 20 -# Number of evaluation trajectories. "-1" for all available -eval_n_trajs: 50 -# Number of extrapolation steps -n_extrap_steps: 0 -# Whether to use test or validation split -test: False -# Seed -seed: 0 -# Cuda device. "-1" for cpu -gpu: 0 -# GPU memory allocation https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html -xla_mem_fraction: 0.75 -# Double precision everywhere other than the ML model -f64: True -# Neighbour list backend. jaxmd_vmap, jaxmd_scan, matscipy -neighbor_list_backend: jaxmd_vmap -# Neighbour list capacity multiplier -neighbor_list_multiplier: 1.25 -# number of workers for data loading -num_workers: 4 - -# Logging settings -# Use wandb for logging -wandb: False -wandb_project: False -# Change this with your own entity -wandb_entity: lagrangebench -# Number of steps between training logging -log_steps: 1000 -# Number of steps between evaluation -eval_steps: 10000 -# Checkpoint directory -ckp_dir: ckp -# Rollout/metrics directory -rollout_dir: -# Rollout storage format. vtk, pkl, none -out_type: none -# List of metrics. 
mse, mae, sinkhorn, e_kin -metrics: - - mse -metrics_stride: 10 - -# Inference params (valid/test) -metrics_infer: - - mse - - sinkhorn - - e_kin -metrics_stride_infer: 1 -out_type_infer: pkl -eval_n_trajs_infer: -1 -# batch size for validation/testing -batch_size_infer: 2 diff --git a/configs/ldc_2d/base.yaml b/configs/ldc_2d/base.yaml index d9fdc96..69ff382 100644 --- a/configs/ldc_2d/base.yaml +++ b/configs/ldc_2d/base.yaml @@ -1,7 +1,9 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/2D_LDC_2708_10kevery100 -wandb_project: ldc_2d +dataset_path: datasets/2D_LDC_2708_10kevery100 -neighbor_list_multiplier: 2.0 -noise_std: 0.001 +logging: + wandb_project: ldc_2d + +neighbors: + multiplier: 2.0 \ No newline at end of file diff --git a/configs/ldc_2d/gns.yaml b/configs/ldc_2d/gns.yaml index fda8aea..39eeb31 100644 --- a/configs/ldc_2d/gns.yaml +++ b/configs/ldc_2d/gns.yaml @@ -1,6 +1,11 @@ -extends: ldc_2d/base.yaml +extends: configs/ldc_2d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + noise_std: 0.001 + optimizer: + lr_start: 5.e-4 diff --git a/configs/ldc_2d/segnn.yaml b/configs/ldc_2d/segnn.yaml index 1adece6..59230f0 100644 --- a/configs/ldc_2d/segnn.yaml +++ b/configs/ldc_2d/segnn.yaml @@ -1,8 +1,12 @@ -extends: ldc_2d/base.yaml +extends: configs/ldc_2d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 5.e-4 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + noise_std: 0.001 + optimizer: + lr_start: 5.e-4 diff --git a/configs/ldc_3d/base.yaml b/configs/ldc_3d/base.yaml index 5dfb668..19fb3fc 100644 --- a/configs/ldc_3d/base.yaml +++ b/configs/ldc_3d/base.yaml @@ -1,6 +1,9 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/3D_LDC_8160_10kevery100 -wandb_project: ldc_3d +dataset_path: datasets/3D_LDC_8160_10kevery100 -neighbor_list_multiplier: 2.0 +logging: + wandb_project: ldc_3d + +neighbors: + multiplier: 2.0 \ No newline at end of file diff --git a/configs/ldc_3d/gns.yaml b/configs/ldc_3d/gns.yaml index dbf14b4..cacf6bb 100644 --- a/configs/ldc_3d/gns.yaml +++ b/configs/ldc_3d/gns.yaml @@ -1,6 +1,10 @@ -extends: ldc_3d/base.yaml +extends: configs/ldc_3d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/ldc_3d/segnn.yaml b/configs/ldc_3d/segnn.yaml index fa4844c..88adf11 100644 --- a/configs/ldc_3d/segnn.yaml +++ b/configs/ldc_3d/segnn.yaml @@ -1,8 +1,11 @@ -extends: ldc_3d/base.yaml +extends: configs/ldc_3d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 5.e-4 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/rpf_2d/base.yaml b/configs/rpf_2d/base.yaml index 0916557..ddc2a0e 100644 --- a/configs/rpf_2d/base.yaml +++ b/configs/rpf_2d/base.yaml @@ -1,4 +1,6 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/2D_RPF_3200_20kevery100 -wandb_project: rpf_2d +dataset_path: datasets/2D_RPF_3200_20kevery100 + +logging: + wandb_project: rpf_2d \ No newline at end of file diff --git a/configs/rpf_2d/egnn.yaml b/configs/rpf_2d/egnn.yaml index 82ab3b3..21e4ef9 100644 --- a/configs/rpf_2d/egnn.yaml +++ b/configs/rpf_2d/egnn.yaml @@ -1,13 +1,16 @@ -extends: 
rpf_2d/base.yaml +extends: configs/rpf_2d/base.yaml -model: egnn -num_mp_steps: 5 -latent_dim: 128 -lr_start: 1.e-4 +model: + name: egnn + num_mp_steps: 5 + latent_dim: 128 + isotropic_norm: True + magnitude_features: True -isotropic_norm: True -magnitude_features: True -loss_weight: - pos: 1.0 - vel: 0.0 - acc: 0.0 +train: + optimizer: + lr_start: 5.e-4 + loss_weight: + pos: 1.0 + vel: 0.0 + acc: 0.0 diff --git a/configs/rpf_2d/gns.yaml b/configs/rpf_2d/gns.yaml index 87c2e81..6313033 100644 --- a/configs/rpf_2d/gns.yaml +++ b/configs/rpf_2d/gns.yaml @@ -1,6 +1,10 @@ -extends: rpf_2d/base.yaml +extends: configs/rpf_2d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/rpf_2d/painn.yaml b/configs/rpf_2d/painn.yaml index 95c4e91..82907f9 100644 --- a/configs/rpf_2d/painn.yaml +++ b/configs/rpf_2d/painn.yaml @@ -1,9 +1,12 @@ -extends: rpf_2d/base.yaml +extends: configs/rpf_2d/base.yaml -model: painn -num_mp_steps: 5 -latent_dim: 128 -lr_start: 1.e-4 +model: + name: painn + num_mp_steps: 5 + latent_dim: 128 + isotropic_norm: True + magnitude_features: True -isotropic_norm: True -magnitude_features: True +train: + optimizer: + lr_start: 1.e-4 diff --git a/configs/rpf_2d/segnn.yaml b/configs/rpf_2d/segnn.yaml index e65e2b4..f447336 100644 --- a/configs/rpf_2d/segnn.yaml +++ b/configs/rpf_2d/segnn.yaml @@ -1,8 +1,11 @@ -extends: rpf_2d/base.yaml +extends: configs/rpf_2d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 1.e-3 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + optimizer: + lr_start: 1.e-3 diff --git a/configs/rpf_3d/base.yaml b/configs/rpf_3d/base.yaml index 7a20c34..ef44b56 100644 --- a/configs/rpf_3d/base.yaml +++ b/configs/rpf_3d/base.yaml @@ -1,4 +1,6 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/3D_RPF_8000_10kevery100 -wandb_project: rpf_3d +dataset_path: datasets/3D_RPF_8000_10kevery100 + +logging: + wandb_project: rpf_3d \ No newline at end of file diff --git a/configs/rpf_3d/egnn.yaml b/configs/rpf_3d/egnn.yaml index 1f793ff..8bdb928 100644 --- a/configs/rpf_3d/egnn.yaml +++ b/configs/rpf_3d/egnn.yaml @@ -1,13 +1,16 @@ -extends: rpf_3d/base.yaml +extends: configs/rpf_3d/base.yaml -model: egnn -num_mp_steps: 5 -latent_dim: 128 -lr_start: 1.e-4 +model: + name: egnn + num_mp_steps: 5 + latent_dim: 128 + isotropic_norm: True + magnitude_features: True -isotropic_norm: True -magnitude_features: True -loss_weight: - pos: 1.0 - vel: 0.0 - acc: 0.0 +train: + optimizer: + lr_start: 1.e-4 + loss_weight: + pos: 1.0 + vel: 0.0 + acc: 0.0 diff --git a/configs/rpf_3d/gns.yaml b/configs/rpf_3d/gns.yaml index 8bb2053..4deb161 100644 --- a/configs/rpf_3d/gns.yaml +++ b/configs/rpf_3d/gns.yaml @@ -1,6 +1,10 @@ -extends: rpf_3d/base.yaml +extends: configs/rpf_3d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/rpf_3d/painn.yaml b/configs/rpf_3d/painn.yaml index cdd5b62..e6e05d9 100644 --- a/configs/rpf_3d/painn.yaml +++ b/configs/rpf_3d/painn.yaml @@ -1,9 +1,12 @@ -extends: rpf_3d/base.yaml +extends: configs/rpf_3d/base.yaml -model: painn -num_mp_steps: 5 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: painn + num_mp_steps: 5 + latent_dim: 128 + isotropic_norm: True + magnitude_features: True 
-isotropic_norm: True -magnitude_features: True +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/rpf_3d/segnn.yaml b/configs/rpf_3d/segnn.yaml index 0f6e6db..813c931 100644 --- a/configs/rpf_3d/segnn.yaml +++ b/configs/rpf_3d/segnn.yaml @@ -1,8 +1,11 @@ -extends: rpf_3d/base.yaml +extends: configs/rpf_3d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 1.e-3 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + optimizer: + lr_start: 1.e-3 diff --git a/configs/tgv_2d/base.yaml b/configs/tgv_2d/base.yaml index f37268e..434a9a2 100644 --- a/configs/tgv_2d/base.yaml +++ b/configs/tgv_2d/base.yaml @@ -1,4 +1,6 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/2D_TGV_2500_10kevery100 -wandb_project: tgv_2d +dataset_path: datasets/2D_TGV_2500_10kevery100 + +logging: + wandb_project: tgv_2d diff --git a/configs/tgv_2d/gns.yaml b/configs/tgv_2d/gns.yaml index 49c2330..17e7b64 100644 --- a/configs/tgv_2d/gns.yaml +++ b/configs/tgv_2d/gns.yaml @@ -1,6 +1,10 @@ -extends: tgv_2d/base.yaml +extends: configs/tgv_2d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/tgv_2d/segnn.yaml b/configs/tgv_2d/segnn.yaml index 865fce3..ba3742b 100644 --- a/configs/tgv_2d/segnn.yaml +++ b/configs/tgv_2d/segnn.yaml @@ -1,8 +1,11 @@ -extends: tgv_2d/base.yaml +extends: configs/tgv_2d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 5.e-4 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/tgv_3d/base.yaml b/configs/tgv_3d/base.yaml index 7c655e4..9f78547 100644 --- a/configs/tgv_3d/base.yaml +++ b/configs/tgv_3d/base.yaml @@ -1,4 +1,6 @@ -extends: defaults.yaml +extends: LAGRANGEBENCH_DEFAULTS -data_dir: datasets/3D_TGV_8000_10kevery100 -wandb_project: tgv_3d +dataset_path: datasets/3D_TGV_8000_10kevery100 + +logging: + wandb_project: tgv_3d diff --git a/configs/tgv_3d/gns.yaml b/configs/tgv_3d/gns.yaml index cf0b741..dd6dd84 100644 --- a/configs/tgv_3d/gns.yaml +++ b/configs/tgv_3d/gns.yaml @@ -1,6 +1,10 @@ -extends: tgv_3d/base.yaml +extends: configs/tgv_3d/base.yaml -model: gns -num_mp_steps: 10 -latent_dim: 128 -lr_start: 5.e-4 +model: + name: gns + num_mp_steps: 10 + latent_dim: 128 + +train: + optimizer: + lr_start: 5.e-4 diff --git a/configs/tgv_3d/segnn.yaml b/configs/tgv_3d/segnn.yaml index ebc81cc..fab105a 100644 --- a/configs/tgv_3d/segnn.yaml +++ b/configs/tgv_3d/segnn.yaml @@ -1,8 +1,11 @@ -extends: tgv_3d/base.yaml +extends: configs/tgv_3d/base.yaml -model: segnn -num_mp_steps: 10 -latent_dim: 64 -lr_start: 5.e-4 +model: + name: segnn + num_mp_steps: 10 + latent_dim: 64 + isotropic_norm: True -isotropic_norm: True +train: + optimizer: + lr_start: 5.e-4 diff --git a/docs/conf.py b/docs/conf.py index 589f76c..bb44253 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -10,7 +10,11 @@ copyright = "2023, Chair of Aerodynamics and Fluid Mechanics, TUM" author = "Artur Toshev, Gianluca Galletti" -version = "0.0.1" +# read the version from pyproject.toml +import toml + +pyproject = toml.load("../pyproject.toml") +version = pyproject["tool"]["poetry"]["version"] # -- Path setup -------------------------------------------------------------- @@ -34,6 +38,8 @@ "sphinx.ext.napoleon", "sphinx.ext.intersphinx", "sphinx.ext.mathjax", + 
# to get defaults.py in the documentation + "sphinx_exec_code", ] numfig = True @@ -58,6 +64,11 @@ } +# -- Options for sphinx-exec-code --------------------------------------------- + +exec_code_working_dir = ".." + + # drop the docstrings of undocumented the namedtuple attributes def remove_namedtuple_attrib_docstring(app, what, name, obj, skip, options): if type(obj) is collections._tuplegetter: diff --git a/docs/index.rst b/docs/index.rst index a881d8e..001a733 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -51,9 +51,9 @@ preprocessing, and time integration. import lagrangebench # Load data - data_train = lagrangebench.data.RPF2D("train") - data_valid = lagrangebench.data.RPF2D("valid", is_rollout=True) - data_test = lagrangebench.data.RPF2D("test", is_rollout=True) + data_train = lagrangebench.RPF2D("train") + data_valid = lagrangebench.RPF2D("valid", extra_seq_length=20) + data_test = lagrangebench.RPF2D("test", extra_seq_length=20) # Case setup (preprocessing and graph building) bounds = np.array(data_train.metadata["bounds"]) @@ -78,8 +78,8 @@ Initialize a GNS model. return lagrangebench.models.GNS( particle_dimension=data_train.metadata["dim"], latent_size=16, - num_mlp_layers=2, - num_message_passing_steps=4, + blocks_per_step=2, + num_mp_steps=4, particle_type_embedding_size=8, )(x) @@ -98,12 +98,12 @@ The ``Trainer`` provides a convenient way to train a model. case=case, data_train=data_train, data_valid=data_valid, - metrics=["mse"], - n_rollout_steps=20, + cfg_eval={"n_rollout_steps": 20, "train": {"metrics": ["mse"]}}, + input_seq_length=6 ) # Train for 25000 steps - params, state, _ = trainer(step_max=25000) + params, state, _ = trainer.train(step_max=25000) Evaluation @@ -119,7 +119,7 @@ When training is done, we can evaluate the model on the test set. data_test, params, state, - metrics=["mse", "sinkhorn", "e_kin"], + cfg_eval_infer={"metrics": ["mse", "sinkhorn", "e_kin"]}, n_rollout_steps=20, ) @@ -130,6 +130,7 @@ Contents .. toctree:: :maxdepth: 2 + pages/defaults pages/data pages/case_setup pages/models diff --git a/docs/pages/defaults.rst b/docs/pages/defaults.rst new file mode 100644 index 0000000..43a067c --- /dev/null +++ b/docs/pages/defaults.rst @@ -0,0 +1,45 @@ +Defaults +=================================== + + + +.. exec_code:: + :hide_code: + :linenos_output: + :language_output: python + :caption: LagrangeBench default values + + + with open("lagrangebench/defaults.py", "r") as file: + defaults_full = file.read() + + # parse defaults: remove imports, only keep the set_defaults function + + defaults_full = defaults_full.split("\n") + + # remove imports + defaults_full = [line for line in defaults_full if not line.startswith("import")] + defaults_full = [line for line in defaults_full if len(line.replace(" ", "")) > 0] + + # remove other functions + keep = False + defaults = [] + for i, line in enumerate(defaults_full): + if line.startswith("def"): + if "set_defaults" in line: + keep = True + else: + keep = False + + if keep: + defaults.append(line) + + # remove function declaration and return + defaults = defaults[2:-2] + + # remove indent + defaults = [line[4:] for line in defaults] + + + print("\n".join(defaults)) + \ No newline at end of file diff --git a/docs/pages/evaluate.rst b/docs/pages/evaluate.rst index af74ac0..2fc8267 100644 --- a/docs/pages/evaluate.rst +++ b/docs/pages/evaluate.rst @@ -10,3 +10,8 @@ Metrics ------- .. automodule:: lagrangebench.evaluate.metrics :members: + +Utils +----- +.. 
automodule:: lagrangebench.evaluate.utils + :members: diff --git a/docs/pages/train.rst b/docs/pages/train.rst index 6b962a9..8c6e0be 100644 --- a/docs/pages/train.rst +++ b/docs/pages/train.rst @@ -5,6 +5,7 @@ Trainer ------- .. automodule:: lagrangebench.train.trainer :members: + :exclude-members: __init__, __delattr__, __setattr__, __hash__, __eq__, __repr__, __weakref__ Strategies ---------- diff --git a/docs/requirements.txt b/docs/requirements.txt index a19b4eb..5bcee88 100644 --- a/docs/requirements.txt +++ b/docs/requirements.txt @@ -9,12 +9,15 @@ jax_md>=0.2.8 jmp>=0.0.4 jraph>=0.0.6.dev0 matscipy>=0.8.0 +omegaconf>=2.3.0 optax>=0.1.7 ott-jax>=0.4.2 pyvista PyYAML sphinx==7.2.6 +sphinx-exec-code sphinx-rtd-theme==1.3.0 +toml>=0.10.2 torch==2.1.0+cpu wandb wget diff --git a/experiments/config.py b/experiments/config.py deleted file mode 100644 index c7d0438..0000000 --- a/experiments/config.py +++ /dev/null @@ -1,220 +0,0 @@ -import argparse -import os -from typing import Dict - -import yaml - - -def cli_arguments() -> Dict: - parser = argparse.ArgumentParser() - group = parser.add_mutually_exclusive_group(required=True) - - # config arguments - group.add_argument("-c", "--config", type=str, help="Path to the config yaml.") - group.add_argument("--model_dir", type=str, help="Path to the model checkpoint.") - - # run arguments - parser.add_argument( - "--mode", type=str, choices=["train", "infer", "all"], help="Train or evaluate." - ) - parser.add_argument("--batch_size", type=int, required=False, help="Batch size.") - parser.add_argument( - "--lr_start", type=float, required=False, help="Starting learning rate." - ) - parser.add_argument( - "--lr_final", type=float, required=False, help="Learning rate after decay." - ) - parser.add_argument( - "--lr_decay_rate", type=float, required=False, help="Learning rate decay." - ) - parser.add_argument( - "--lr_decay_steps", type=int, required=False, help="Learning rate decay steps." - ) - parser.add_argument( - "--noise_std", - type=float, - required=False, - help="Additive noise standard deviation.", - ) - parser.add_argument( - "--test", - action=argparse.BooleanOptionalAction, - help="Run test mode instead of validation.", - ) - parser.add_argument("--seed", type=int, required=False, help="Random seed.") - parser.add_argument( - "--data_dir", type=str, help="Absolute/relative path to the dataset." - ) - parser.add_argument("--ckp_dir", type=str, help="Path for checkpoints.") - - # model arguments - parser.add_argument( - "--model", - type=str, - help="Model name.", - ) - parser.add_argument( - "--input_seq_length", - type=int, - required=False, - help="Input position sequence length.", - ) - parser.add_argument( - "--num_mp_steps", - type=int, - required=False, - help="Number of message passing layers.", - ) - parser.add_argument( - "--num_mlp_layers", type=int, required=False, help="Number of MLP layers." - ) - parser.add_argument( - "--latent_dim", type=int, required=False, help="Hidden layer dimension." 
- ) - parser.add_argument( - "--magnitude_features", - action=argparse.BooleanOptionalAction, - help="Whether to include velocity magnitudes in node features.", - ) - parser.add_argument( - "--isotropic_norm", - action=argparse.BooleanOptionalAction, - help="Use isotropic normalization.", - ) - - # output arguments - parser.add_argument( - "--out_type", - type=str, - required=False, - choices=["vtk", "pkl", "none"], - help="Output type to store rollouts during validation.", - ) - parser.add_argument( - "--out_type_infer", - type=str, - required=False, - choices=["vtk", "pkl", "none"], - help="Output type to store rollouts during inference.", - ) - parser.add_argument( - "--rollout_dir", type=str, required=False, help="Directory to write rollouts." - ) - - # segnn-specific arguments - parser.add_argument( - "--lmax_attributes", - type=int, - required=False, - help="Maximum degree of attributes.", - ) - parser.add_argument( - "--lmax_hidden", - type=int, - required=False, - help="Maximum degree of hidden layers.", - ) - parser.add_argument( - "--segnn_norm", - type=str, - required=False, - choices=["instance", "batch", "none"], - help="Normalisation type.", - ) - parser.add_argument( - "--velocity_aggregate", - type=str, - required=False, - choices=["avg", "sum", "last", "all"], - help="Velocity aggregation function for node attributes.", - ) - parser.add_argument( - "--attribute_mode", - type=str, - required=False, - choices=["add", "concat", "velocity"], - help="How to combine node attributes.", - ) - # HAE-specific arguments - parser.add_argument( - "--right_attribute", - required=False, - action=argparse.BooleanOptionalAction, - help="Whether to use last velocity to steer the attribute embedding.", - ) - parser.add_argument( - "--attribute_embedding_blocks", - required=False, - type=int, - help="Number of embedding layers for the attributes.", - ) - - # misc arguments - parser.add_argument( - "--gpu", type=int, required=False, help="CUDA device ID to use." - ) - parser.add_argument( - "--f64", - required=False, - action=argparse.BooleanOptionalAction, - help="Whether to use double precision.", - ) - - parser.add_argument( - "--eval_n_trajs", - required=False, - type=int, - help="Number of trajectories to evaluate during validation.", - ) - parser.add_argument( - "--eval_n_trajs_infer", - required=False, - type=int, - help="Number of trajectories to evaluate during inference.", - ) - - parser.add_argument( - "--metrics", - required=False, - nargs="+", - help="Validation metrics to evaluate. 
Choose from: mse, mae, sinkhorn, e_kin.", - ) - parser.add_argument( - "--metrics_infer", - required=False, - nargs="+", - help="Inference metrics to evaluate during inference.", - ) - parser.add_argument( - "--metrics_stride", - required=False, - type=int, - help="Stride for Sinkhorn and e_kin during validation", - ) - parser.add_argument( - "--metrics_stride_infer", - required=False, - type=int, - help="Stride for Sinkhorn and e_kin during inference.", - ) - parser.add_argument( - "--n_rollout_steps", - required=False, - type=int, - help="Number of rollout steps during validation/testing.", - ) - # only keep passed arguments to avoid overwriting config - return {k: v for k, v in vars(parser.parse_args()).items() if v is not None} - - -class NestedLoader(yaml.SafeLoader): - """Load yaml files with nested configs.""" - - def get_single_data(self): - parent = {} - config = super().get_single_data() - if "extends" in config and (included := config["extends"]): - del config["extends"] - with open(os.path.join("configs", included), "r") as f: - parent = yaml.load(f, NestedLoader) - return {**parent, **config} diff --git a/experiments/run.py b/experiments/run.py deleted file mode 100644 index 33494ea..0000000 --- a/experiments/run.py +++ /dev/null @@ -1,169 +0,0 @@ -import copy -import os -import os.path as osp -from argparse import Namespace -from datetime import datetime - -import haiku as hk -import jax.numpy as jnp -import jmp -import numpy as np -import wandb -import yaml - -from experiments.utils import setup_data, setup_model -from lagrangebench import Trainer, infer -from lagrangebench.case_setup import case_builder -from lagrangebench.evaluate import averaged_metrics -from lagrangebench.utils import PushforwardConfig - - -def train_or_infer(args: Namespace): - data_train, data_valid, data_test, args = setup_data(args) - - # neighbors search - bounds = np.array(data_train.metadata["bounds"]) - args.box = bounds[:, 1] - bounds[:, 0] - - args.info.len_train = len(data_train) - args.info.len_eval = len(data_valid) - - # setup core functions - case = case_builder( - box=args.box, - metadata=data_train.metadata, - input_seq_length=args.config.input_seq_length, - isotropic_norm=args.config.isotropic_norm, - noise_std=args.config.noise_std, - magnitude_features=args.config.magnitude_features, - external_force_fn=data_train.external_force_fn, - neighbor_list_backend=args.config.neighbor_list_backend, - neighbor_list_multiplier=args.config.neighbor_list_multiplier, - dtype=(jnp.float64 if args.config.f64 else jnp.float32), - ) - - _, particle_type = data_train[0] - - args.info.homogeneous_particles = particle_type.max() == particle_type.min() - args.metadata = data_train.metadata - args.normalization_stats = case.normalization_stats - args.config.has_external_force = data_train.external_force_fn is not None - - # setup model from configs - model, MODEL = setup_model(args) - model = hk.without_apply_rng(hk.transform_with_state(model)) - - # mixed precision training based on this reference: - # https://github.com/deepmind/dm-haiku/blob/main/examples/imagenet/train.py - policy = jmp.get_policy("params=float32,compute=float32,output=float32") - hk.mixed_precision.set_policy(MODEL, policy) - - if args.config.mode == "train" or args.config.mode == "all": - print("Start training...") - # save config file - run_prefix = f"{args.config.model}_{data_train.name}" - data_and_time = datetime.today().strftime("%Y%m%d-%H%M%S") - args.info.run_name = f"{run_prefix}_{data_and_time}" - - 
args.config.new_checkpoint = os.path.join( - args.config.ckp_dir, args.info.run_name - ) - os.makedirs(args.config.new_checkpoint, exist_ok=True) - os.makedirs(os.path.join(args.config.new_checkpoint, "best"), exist_ok=True) - with open(os.path.join(args.config.new_checkpoint, "config.yaml"), "w") as f: - yaml.dump(vars(args.config), f) - with open( - os.path.join(args.config.new_checkpoint, "best", "config.yaml"), "w" - ) as f: - yaml.dump(vars(args.config), f) - - if args.config.wandb: - # wandb doesn't like Namespace objects - args_dict = copy.copy(args) - args_dict.config = vars(args.config) - args_dict.info = vars(args.info) - - wandb_run = wandb.init( - project=args.config.wandb_project, - entity=args.config.wandb_entity, - name=args.info.run_name, - config=args_dict, - save_code=True, - ) - else: - wandb_run = None - - pf_config = PushforwardConfig( - steps=args.config.pushforward["steps"], - unrolls=args.config.pushforward["unrolls"], - probs=args.config.pushforward["probs"], - ) - - trainer = Trainer( - model, - case, - data_train, - data_valid, - pushforward=pf_config, - metrics=args.config.metrics, - seed=args.config.seed, - batch_size=args.config.batch_size, - input_seq_length=args.config.input_seq_length, - noise_std=args.config.noise_std, - lr_start=args.config.lr_start, - lr_final=args.config.lr_final, - lr_decay_steps=args.config.lr_decay_steps, - lr_decay_rate=args.config.lr_decay_rate, - loss_weight=args.config.loss_weight, - n_rollout_steps=args.config.n_rollout_steps, - eval_n_trajs=args.config.eval_n_trajs, - rollout_dir=args.config.rollout_dir, - out_type=args.config.out_type, - log_steps=args.config.log_steps, - eval_steps=args.config.eval_steps, - metrics_stride=args.config.metrics_stride, - num_workers=args.config.num_workers, - batch_size_infer=args.config.batch_size_infer, - ) - _, _, _ = trainer( - step_max=args.config.step_max, - load_checkpoint=args.config.model_dir, - store_checkpoint=args.config.new_checkpoint, - wandb_run=wandb_run, - ) - - if args.config.wandb: - wandb.finish() - - if args.config.mode == "infer" or args.config.mode == "all": - print("Start inference...") - if args.config.mode == "all": - args.config.model_dir = os.path.join(args.config.new_checkpoint, "best") - assert osp.isfile(os.path.join(args.config.model_dir, "params_tree.pkl")) - - args.config.rollout_dir = args.config.model_dir.replace("ckp", "rollout") - os.makedirs(args.config.rollout_dir, exist_ok=True) - - if args.config.eval_n_trajs_infer is None: - args.config.eval_n_trajs_infer = args.config.eval_n_trajs - - assert args.config.model_dir, "model_dir must be specified for inference." 
- metrics = infer( - model, - case, - data_test if args.config.test else data_valid, - load_checkpoint=args.config.model_dir, - metrics=args.config.metrics_infer, - rollout_dir=args.config.rollout_dir, - eval_n_trajs=args.config.eval_n_trajs_infer, - n_rollout_steps=args.config.n_rollout_steps, - out_type=args.config.out_type_infer, - n_extrap_steps=args.config.n_extrap_steps, - seed=args.config.seed, - metrics_stride=args.config.metrics_stride_infer, - batch_size=args.config.batch_size_infer, - ) - - split = "test" if args.config.test else "valid" - print(f"Metrics of {args.config.model_dir} on {split} split:") - print(averaged_metrics(metrics)) diff --git a/experiments/utils.py b/experiments/utils.py deleted file mode 100644 index 8168178..0000000 --- a/experiments/utils.py +++ /dev/null @@ -1,156 +0,0 @@ -import os -import os.path as osp -from argparse import Namespace -from typing import Callable, Tuple, Type - -import jax -import jax.numpy as jnp -from e3nn_jax import Irreps -from jax_md import space - -from lagrangebench import models -from lagrangebench.data import H5Dataset -from lagrangebench.models.utils import node_irreps -from lagrangebench.utils import NodeType - - -def setup_data(args: Namespace) -> Tuple[H5Dataset, H5Dataset, Namespace]: - if not osp.isabs(args.config.data_dir): - args.config.data_dir = osp.join(os.getcwd(), args.config.data_dir) - - args.info.dataset_name = osp.basename(args.config.data_dir.split("/")[-1]) - if args.config.ckp_dir is not None: - os.makedirs(args.config.ckp_dir, exist_ok=True) - if args.config.rollout_dir is not None: - os.makedirs(args.config.rollout_dir, exist_ok=True) - - # dataloader - data_train = H5Dataset( - "train", - dataset_path=args.config.data_dir, - input_seq_length=args.config.input_seq_length, - extra_seq_length=args.config.pushforward["unrolls"][-1], - nl_backend=args.config.neighbor_list_backend, - ) - data_valid = H5Dataset( - "valid", - dataset_path=args.config.data_dir, - input_seq_length=args.config.input_seq_length, - extra_seq_length=args.config.n_rollout_steps, - nl_backend=args.config.neighbor_list_backend, - ) - data_test = H5Dataset( - "test", - dataset_path=args.config.data_dir, - input_seq_length=args.config.input_seq_length, - extra_seq_length=args.config.n_rollout_steps, - nl_backend=args.config.neighbor_list_backend, - ) - if args.config.eval_n_trajs == -1: - args.config.eval_n_trajs = data_valid.num_samples - if args.config.eval_n_trajs_infer == -1: - args.config.eval_n_trajs_infer = data_valid.num_samples - assert data_valid.num_samples >= args.config.eval_n_trajs, ( - f"Number of available evaluation trajectories ({data_valid.num_samples}) " - f"exceeds eval_n_trajs ({args.config.eval_n_trajs})" - ) - - args.info.has_external_force = bool(data_train.external_force_fn is not None) - - return data_train, data_valid, data_test, args - - -def setup_model(args: Namespace) -> Tuple[Callable, Type]: - """Setup model based on args.""" - model_name = args.config.model.lower() - metadata = args.metadata - - if model_name == "gns": - - def model_fn(x): - return models.GNS( - particle_dimension=metadata["dim"], - latent_size=args.config.latent_dim, - blocks_per_step=args.config.num_mlp_layers, - num_mp_steps=args.config.num_mp_steps, - num_particle_types=NodeType.SIZE, - particle_type_embedding_size=16, - )(x) - - MODEL = models.GNS - elif model_name == "segnn": - # Hx1o vel, Hx0e vel, 2x1o boundary, 9x0e type - node_feature_irreps = node_irreps( - metadata, - args.config.input_seq_length, - 
args.config.has_external_force, - args.config.magnitude_features, - args.info.homogeneous_particles, - ) - # 1o displacement, 0e distance - edge_feature_irreps = Irreps("1x1o + 1x0e") - - def model_fn(x): - return models.SEGNN( - node_features_irreps=node_feature_irreps, - edge_features_irreps=edge_feature_irreps, - scalar_units=args.config.latent_dim, - lmax_hidden=args.config.lmax_hidden, - lmax_attributes=args.config.lmax_attributes, - output_irreps=Irreps("1x1o"), - num_mp_steps=args.config.num_mp_steps, - n_vels=args.config.input_seq_length - 1, - velocity_aggregate=args.config.velocity_aggregate, - homogeneous_particles=args.info.homogeneous_particles, - blocks_per_step=args.config.num_mlp_layers, - norm=args.config.segnn_norm, - )(x) - - MODEL = models.SEGNN - elif model_name == "egnn": - box = args.box - if jnp.array(metadata["periodic_boundary_conditions"]).any(): - displacement_fn, shift_fn = space.periodic(jnp.array(box)) - else: - displacement_fn, shift_fn = space.free() - - displacement_fn = jax.vmap(displacement_fn, in_axes=(0, 0)) - shift_fn = jax.vmap(shift_fn, in_axes=(0, 0)) - - def model_fn(x): - return models.EGNN( - hidden_size=args.config.latent_dim, - output_size=1, - dt=metadata["dt"] * metadata["write_every"], - displacement_fn=displacement_fn, - shift_fn=shift_fn, - normalization_stats=args.normalization_stats, - num_mp_steps=args.config.num_mp_steps, - n_vels=args.config.input_seq_length - 1, - residual=True, - )(x) - - MODEL = models.EGNN - elif model_name == "painn": - assert args.config.magnitude_features, "PaiNN requires magnitudes" - radius = metadata["default_connectivity_radius"] * 1.5 - - def model_fn(x): - return models.PaiNN( - hidden_size=args.config.latent_dim, - output_size=1, - n_vels=args.config.input_seq_length - 1, - radial_basis_fn=models.painn.gaussian_rbf(20, radius, trainable=True), - cutoff_fn=models.painn.cosine_cutoff(radius), - num_mp_steps=args.config.num_mp_steps, - )(x) - - MODEL = models.PaiNN - elif model_name == "linear": - - def model_fn(x): - return models.Linear(dim_out=metadata["dim"])(x) - - MODEL = models.Linear - - return model_fn, MODEL diff --git a/lagrangebench/__init__.py b/lagrangebench/__init__.py index 39cf0eb..f9dddcc 100644 --- a/lagrangebench/__init__.py +++ b/lagrangebench/__init__.py @@ -3,16 +3,17 @@ from .evaluate import infer from .models import EGNN, GNS, SEGNN, PaiNN from .train.trainer import Trainer -from .utils import PushforwardConfig __all__ = [ "Trainer", "infer", "case_builder", + "models", "GNS", "EGNN", "SEGNN", "PaiNN", + "data", "H5Dataset", "TGV2D", "TGV3D", @@ -21,7 +22,6 @@ "LDC2D", "LDC3D", "DAM2D", - "PushforwardConfig", ] -__version__ = "0.0.1" +__version__ = "0.1.2" diff --git a/lagrangebench/case_setup/case.py b/lagrangebench/case_setup/case.py index 21ee2ec..a9a6ad6 100644 --- a/lagrangebench/case_setup/case.py +++ b/lagrangebench/case_setup/case.py @@ -8,6 +8,7 @@ from jax_md import space from jax_md.dataclasses import dataclass, static_field from jax_md.partition import NeighborList, NeighborListFormat +from omegaconf import DictConfig, OmegaConf from lagrangebench.data.utils import get_dataset_stats from lagrangebench.defaults import defaults @@ -63,12 +64,10 @@ def case_builder( box: Tuple[float, float, float], metadata: Dict, input_seq_length: int, - isotropic_norm: bool = defaults.isotropic_norm, - noise_std: float = defaults.noise_std, + cfg_neighbors: Union[Dict, DictConfig] = defaults.neighbors, + cfg_model: Union[Dict, DictConfig] = defaults.model, + noise_std: float = 
defaults.train.noise_std, external_force_fn: Optional[Callable] = None, - magnitude_features: bool = defaults.magnitude_features, - neighbor_list_backend: str = defaults.neighbor_list_backend, - neighbor_list_multiplier: float = defaults.neighbor_list_multiplier, dtype: jnp.dtype = defaults.dtype, ): """Set up a CaseSetupFn that contains every required function besides the model. @@ -84,15 +83,24 @@ def case_builder( box: Box xyz sizes of the system. metadata: Dataset metadata dictionary. input_seq_length: Length of the input sequence. - isotropic_norm: Whether to normalize dimensions equally. + cfg_neighbors: Configuration dictionary for the neighbor list. + cfg_model: Configuration dictionary for the model / feature builder. noise_std: Noise standard deviation. external_force_fn: External force function. - magnitude_features: Whether to add velocity magnitudes in the features. - neighbor_list_backend: Backend of the neighbor list. - neighbor_list_multiplier: Capacity multiplier of the neighbor list. dtype: Data type. """ - normalization_stats = get_dataset_stats(metadata, isotropic_norm, noise_std) + if isinstance(cfg_neighbors, Dict): + cfg_neighbors = OmegaConf.create(cfg_neighbors) + if isinstance(cfg_model, Dict): + cfg_model = OmegaConf.create(cfg_model) + + # if one of the cfg_* arguments has a subset of the default configs, merge them + cfg_neighbors = OmegaConf.merge(defaults.neighbors, cfg_neighbors) + cfg_model = OmegaConf.merge(defaults.model, cfg_model) + + normalization_stats = get_dataset_stats( + metadata, cfg_model.isotropic_norm, noise_std + ) # apply PBC in all directions or not at all if jnp.array(metadata["periodic_boundary_conditions"]).any(): @@ -102,9 +110,9 @@ def case_builder( displacement_fn_set = vmap(displacement_fn, in_axes=(0, 0)) - if neighbor_list_multiplier < 1.25: + if cfg_neighbors.multiplier < 1.25: warnings.warn( - f"neighbor_list_multiplier={neighbor_list_multiplier} < 1.25 is very low. " + f"cfg_neighbors.multiplier={cfg_neighbors.multiplier} < 1.25 is very low. " "Be especially cautious if you batch training and/or inference as " "reallocation might be necessary based on different overflow conditions. 
" "See https://github.com/tumaer/lagrangebench/pull/20#discussion_r1443811262" @@ -113,9 +121,9 @@ def case_builder( neighbor_fn = neighbor_list( displacement_fn, jnp.array(box), - backend=neighbor_list_backend, + backend=cfg_neighbors.backend, r_cutoff=metadata["default_connectivity_radius"], - capacity_multiplier=neighbor_list_multiplier, + capacity_multiplier=cfg_neighbors.multiplier, mask_self=False, format=NeighborListFormat.Sparse, num_particles_max=metadata["num_particles_max"], @@ -128,7 +136,7 @@ def case_builder( connectivity_radius=metadata["default_connectivity_radius"], displacement_fn=displacement_fn, pbc=metadata["periodic_boundary_conditions"], - magnitude_features=magnitude_features, + magnitude_features=cfg_model.magnitude_features, external_force_fn=external_force_fn, ) diff --git a/lagrangebench/data/data.py b/lagrangebench/data/data.py index 6edbab9..1513d28 100644 --- a/lagrangebench/data/data.py +++ b/lagrangebench/data/data.py @@ -6,6 +6,7 @@ import os import os.path as osp import re +import warnings import zipfile from typing import Optional @@ -17,7 +18,7 @@ from lagrangebench.utils import NodeType -ZENODO_PREFIX="https://zenodo.org/records/10491868/files/" +ZENODO_PREFIX = "https://zenodo.org/records/10491868/files/" URLS = { "tgv2d": f"{ZENODO_PREFIX}2D_TGV_2500_10kevery100.zip", "rpf2d": f"{ZENODO_PREFIX}2D_RPF_3200_20kevery100.zip", @@ -64,8 +65,7 @@ def __init__( nl_backend: Which backend to use for the neighbor list """ - if dataset_path.endswith("/"): # remove trailing slash in dataset path - dataset_path = dataset_path[:-1] + dataset_path = osp.normpath(dataset_path) # remove potential trailing slash if name is None: self.name = get_dataset_name_from_path(dataset_path) @@ -266,21 +266,29 @@ def __len__(self): def get_dataset_name_from_path(path: str) -> str: """Infer the dataset name from the provided path. - This function assumes that the dataset directory name has the following structure: - {2D|3D}_{TGV|RPF|LDC|DAM}_{num_particles_max}_{num_steps}every{sampling_rate} - - The dataset name then becomes one of the following: - {tgv2d|tgv3d|rpf2d|rpf3d|ldc2d|ldc3d|dam2d} + Variant 1: + If the dataset directory contains {2|3}D_{ABC}, then the name is inferred as + {abc2d|abc3d}. These names are based on the lagrangebench dataset directories: + {2D|3D}_{TGV|RPF|LDC|DAM}_{num_particles_max}_{num_steps}every{sampling_rate} + The shorter dataset names then become one of the following: + {tgv2d|tgv3d|rpf2d|rpf3d|ldc2d|ldc3d|dam2d} + Variant 2: + If the condition {2|3}D_{ABC} is not met, the name is the dataset directory """ - name = re.search(r"(?:2D|3D)_[A-Z]{3}", path) - assert name is not None, ( - f"No valid dataset name found in path {path}. " - "Valid name formats: {2D|3D}_{TGV|RPF|LDC|DAM} " - "Alternatively, you can specify the dataset name explicitly." - ) - name = name.group(0) - name = f"{name.split('_')[1]}{name.split('_')[0]}".lower() + dir = osp.basename(osp.normpath(path)) + name = re.search(r"(?:2D|3D)_[A-Z]{3}", dir) + + if name is not None: # lagrangebench convention used + name = name.group(0) + name = f"{name.split('_')[1]}{name.split('_')[0]}".lower() + else: + warnings.warn( + f"Dataset directory {dir} does not follow the lagrangebench convention. " + "Valid name formats: {2D|3D}_{TGV|RPF|LDC|DAM}. Alternatively, you can " + "specify the dataset name explicitly." 
+ ) + name = dir return name diff --git a/lagrangebench/defaults.py b/lagrangebench/defaults.py index 9cb3c22..c967ff8 100644 --- a/lagrangebench/defaults.py +++ b/lagrangebench/defaults.py @@ -1,70 +1,198 @@ -"""Default lagrangebench values.""" - -from dataclasses import dataclass - -import jax.numpy as jnp - - -@dataclass(frozen=True) -class defaults: - """ - Default lagrangebench values. - - Attributes: - seed: random seed. Default 0. - batch_size: batch size. Default 1. - step_max: max number of training steps. Default ``1e7``. - dtype: data type. Default ``jnp.float32``. - magnitude_features: whether to include velocity magnitudes. Default False. - isotropic_norm: whether to normalize dimensions equally. Default False. - lr_start: initial learning rate. Default 1e-4. - lr_final: final learning rate (after exponential decay). Default 1e-6. - lr_decay_steps: number of steps to decay learning rate - lr_decay_rate: learning rate decay rate. Default 0.1. - noise_std: standard deviation of the GNS-style noise. Default 1e-4. - input_seq_length: number of input steps. Default 6. - n_rollout_steps: number of eval rollout steps. -1 is full rollout. Default -1. - eval_n_trajs: number of trajectories to evaluate. Default 1 trajectory. - rollout_dir: directory to save rollouts. Default None. - out_type: type of output. None means no rollout is stored. Default None. - n_extrap_steps: number of extrapolation steps. Default 0. - log_steps: number of steps between logs. Default 1000. - eval_steps: number of steps between evaluations and checkpoints. Default 5000. - neighbor_list_backend: neighbor list routine. Default "jaxmd_vmap". - neighbor_list_multiplier: multiplier for neighbor list capacity. Default 1.25. - """ - - # training - seed: int = 0 # random seed - batch_size: int = 1 # batch size - step_max: int = 5e5 # max number of training steps - dtype: jnp.dtype = jnp.float64 # data type for preprocessing - magnitude_features: bool = False # whether to include velocity magnitude features - isotropic_norm: bool = False # whether to normalize dimensions equally - num_workers: int = 4 # number of workers for data loading - - # learning rate - lr_start: float = 1e-4 # initial learning rate - lr_final: float = 1e-6 # final learning rate (after exponential decay) - lr_decay_steps: int = 1e5 # number of steps to decay learning rate - lr_decay_rate: float = 0.1 # learning rate decay rate - - noise_std: float = 3e-4 # standard deviation of the GNS-style noise - - # evaluation - input_seq_length: int = 6 # number of input steps - n_rollout_steps: int = -1 # number of eval rollout steps. -1 is full rollout - eval_n_trajs: int = 1 # number of trajectories to evaluate - rollout_dir: str = None # directory to save rollouts - out_type: str = "none" # type of output. 
None means no rollout is stored - n_extrap_steps: int = 0 # number of extrapolation steps - metrics_stride: int = 10 # stride for e_kin and sinkhorn - batch_size_infer: int = 2 # batch size for validation/testing - - # logging - log_steps: int = 1000 # number of steps between logs - eval_steps: int = 10000 # number of steps between evaluations and checkpoints - - # neighbor list - neighbor_list_backend: str = "jaxmd_vmap" # backend for neighbor list computation - neighbor_list_multiplier: float = 1.25 # multiplier for neighbor list capacity +"""Default lagrangebench configs.""" + + +from omegaconf import DictConfig, OmegaConf + + +def set_defaults(cfg: DictConfig = OmegaConf.create({})) -> DictConfig: + """Set default lagrangebench configs.""" + + ### global and hardware-related configs + + # configuration file. Either "config" or "load_ckp" must be specified. + # If "config" is specified, "load_ckp" is ignored. + cfg.config = None + # Load checkpointed model from this directory + cfg.load_ckp = None + # One of "train", "infer" or "all" (= both) + cfg.mode = "all" + # path to data directory + cfg.dataset_path = None + # random seed + cfg.seed = 0 + # data type for preprocessing. One of "float32" or "float64" + cfg.dtype = "float64" + # gpu device. -1 for CPU. Should be specified before importing the library. + cfg.gpu = None + # XLA memory fraction to be preallocated. The JAX default is 0.75. + # Should be specified before importing the library. + cfg.xla_mem_fraction = None + + ### model + cfg.model = OmegaConf.create({}) + + # model architecture name. gns, segnn, egnn + cfg.model.name = None + # Length of the position input sequence + cfg.model.input_seq_length = 6 + # Number of message passing steps + cfg.model.num_mp_steps = 10 + # Number of MLP layers + cfg.model.num_mlp_layers = 2 + # Hidden dimension + cfg.model.latent_dim = 128 + # whether to include velocity magnitude features + cfg.model.magnitude_features = False + # whether to normalize dimensions equally + cfg.model.isotropic_norm = False + + # SEGNN only parameters + # steerable attributes level + cfg.model.lmax_attributes = 1 + # Level of the hidden layer + cfg.model.lmax_hidden = 1 + # SEGNN normalization. instance, batch, none + cfg.model.segnn_norm = "none" + # SEGNN velocity aggregation. 
avg or last + cfg.model.velocity_aggregate = "avg" + + ### training + cfg.train = OmegaConf.create({}) + + # batch size + cfg.train.batch_size = 1 + # max number of training steps + cfg.train.step_max = 500_000 + # number of workers for data loading + cfg.train.num_workers = 4 + # standard deviation of the GNS-style noise + cfg.train.noise_std = 3.0e-4 + + # optimizer + cfg.train.optimizer = OmegaConf.create({}) + + # initial learning rate + cfg.train.optimizer.lr_start = 1.0e-4 + # final learning rate (after exponential decay) + cfg.train.optimizer.lr_final = 1.0e-6 + # learning rate decay rate + cfg.train.optimizer.lr_decay_rate = 0.1 + # number of steps to decay learning rate + cfg.train.optimizer.lr_decay_steps = 1.0e5 + + # pushforward + cfg.train.pushforward = OmegaConf.create({}) + + # At which training step to introduce next unroll stage + cfg.train.pushforward.steps = [-1, 20000, 300000, 400000] + # For how many steps to unroll + cfg.train.pushforward.unrolls = [0, 1, 2, 3] + # Which probability ratio to keep between the unrolls + cfg.train.pushforward.probs = [18, 2, 1, 1] + + # loss weights + cfg.train.loss_weight = OmegaConf.create({}) + + # weight for acceleration error + cfg.train.loss_weight.acc = 1.0 + # weight for velocity error + cfg.train.loss_weight.vel = 0.0 + # weight for position error + cfg.train.loss_weight.pos = 0.0 + + ### evaluation + cfg.eval = OmegaConf.create({}) + + # number of eval rollout steps. -1 is full rollout + cfg.eval.n_rollout_steps = 20 + # whether to use the test or valid split + cfg.eval.test = False + # rollouts directory + cfg.eval.rollout_dir = None + + # configs for validation during training + cfg.eval.train = OmegaConf.create({}) + + # number of trajectories to evaluate + cfg.eval.train.n_trajs = 50 + # stride for e_kin and sinkhorn + cfg.eval.train.metrics_stride = 10 + # batch size + cfg.eval.train.batch_size = 1 + # metrics to evaluate + cfg.eval.train.metrics = ["mse"] + # write validation rollouts. One of "none", "vtk", or "pkl" + cfg.eval.train.out_type = "none" + + # configs for inference/testing + cfg.eval.infer = OmegaConf.create({}) + + # number of trajectories to evaluate during inference + cfg.eval.infer.n_trajs = -1 + # stride for e_kin and sinkhorn + cfg.eval.infer.metrics_stride = 1 + # batch size + cfg.eval.infer.batch_size = 2 + # metrics for inference + cfg.eval.infer.metrics = ["mse", "e_kin", "sinkhorn"] + # write inference rollouts. 
One of "none", "vtk", or "pkl" + cfg.eval.infer.out_type = "pkl" + + # number of extrapolation steps during inference + cfg.eval.infer.n_extrap_steps = 0 + + ### logging + cfg.logging = OmegaConf.create({}) + + # number of steps between loggings + cfg.logging.log_steps = 1000 + # number of steps between evaluations and checkpoints + cfg.logging.eval_steps = 10000 + # wandb enable + cfg.logging.wandb = False + # wandb project name + cfg.logging.wandb_project = None + # wandb entity name + cfg.logging.wandb_entity = "lagrangebench" + # checkpoint directory + cfg.logging.ckp_dir = "ckp" + # name of training run + cfg.logging.run_name = None + + ### neighbor list + cfg.neighbors = OmegaConf.create({}) + + # backend for neighbor list computation + cfg.neighbors.backend = "jaxmd_vmap" + # multiplier for neighbor list capacity + cfg.neighbors.multiplier = 1.25 + + return cfg + + +defaults = set_defaults() + + +def check_cfg(cfg: DictConfig): + """Check if the configs are valid.""" + + assert cfg.mode in ["train", "infer", "all"] + assert cfg.dtype in ["float32", "float64"] + assert cfg.dataset_path is not None, "dataset_path must be specified." + + assert cfg.model.input_seq_length >= 2, "At least two positions for one past vel." + + pf = cfg.train.pushforward + assert len(pf.steps) == len(pf.unrolls) == len(pf.probs) + assert all([s >= 0 for s in pf.unrolls]), "All unrolls must be non-negative." + assert all([s >= 0 for s in pf.probs]), "All probabilities must be non-negative." + lwv = cfg.train.loss_weight.values() + assert all([w >= 0 for w in lwv]), "All loss weights must be non-negative." + assert sum(lwv) > 0, "At least one loss weight must be non-zero." + + assert cfg.eval.train.n_trajs >= -1 + assert cfg.eval.infer.n_trajs >= -1 + assert set(cfg.eval.train.metrics).issubset(["mse", "e_kin", "sinkhorn"]) + assert set(cfg.eval.infer.metrics).issubset(["mse", "e_kin", "sinkhorn"]) + assert cfg.eval.train.out_type in ["none", "vtk", "pkl"] + assert cfg.eval.infer.out_type in ["none", "vtk", "pkl"] diff --git a/lagrangebench/evaluate/rollout.py b/lagrangebench/evaluate/rollout.py index dde7627..341b1ab 100644 --- a/lagrangebench/evaluate/rollout.py +++ b/lagrangebench/evaluate/rollout.py @@ -4,13 +4,14 @@ import pickle import time from functools import partial -from typing import Callable, Iterable, List, Optional, Tuple +from typing import Callable, Dict, Iterable, Optional, Tuple, Union import haiku as hk import jax import jax.numpy as jnp import jax_md.partition as partition from jax import jit, vmap +from omegaconf import DictConfig, OmegaConf from torch.utils.data import DataLoader from lagrangebench.data import H5Dataset @@ -74,7 +75,7 @@ def _forward_eval( return current_positions, state -def eval_batched_rollout( +def _eval_batched_rollout( forward_eval_vmap: Callable, preprocess_eval_vmap: Callable, case, @@ -237,7 +238,7 @@ def eval_rollout( # (pos_input_batch, particle_type_batch) = traj_batch_i # pos_input_batch.shape = (batch, num_particles, seq_length, dim) - example_rollout_batch, metrics_batch, neighbors = eval_batched_rollout( + example_rollout_batch, metrics_batch, neighbors = _eval_batched_rollout( forward_eval_vmap=forward_eval_vmap, preprocess_eval_vmap=preprocess_eval_vmap, case=case, @@ -289,7 +290,7 @@ def eval_rollout( "tag": example_rollout["particle_type"], } write_vtk(ref_state_vtk, f"{file_prefix}_ref_{k}.vtk") - if out_type == "pkl": + elif out_type == "pkl": filename = f"{file_prefix}.pkl" with open(filename, "wb") as f: @@ -313,16 +314,11 @@ def infer( 
data_test: H5Dataset, params: Optional[hk.Params] = None, state: Optional[hk.State] = None, - load_checkpoint: Optional[str] = None, - metrics: List = ["mse"], - rollout_dir: Optional[str] = None, - eval_n_trajs: int = defaults.eval_n_trajs, - n_rollout_steps: int = defaults.n_rollout_steps, - out_type: str = defaults.out_type, - n_extrap_steps: int = defaults.n_extrap_steps, + load_ckp: Optional[str] = None, + cfg_eval_infer: Union[Dict, DictConfig] = defaults.eval.infer, + rollout_dir: Optional[str] = defaults.eval.rollout_dir, + n_rollout_steps: int = defaults.eval.n_rollout_steps, seed: int = defaults.seed, - metrics_stride: int = defaults.metrics_stride, - batch_size: int = defaults.batch_size_infer, ): """ Infer on a dataset, compute metrics and optionally save rollout in out_type format. @@ -333,45 +329,50 @@ def infer( data_test: Test dataset. params: Haiku params. state: Haiku state. - load_checkpoint: Path to checkpoint directory. - metrics: Metrics to compute. + load_ckp: Path to checkpoint directory. rollout_dir: Path to rollout directory. - eval_n_trajs: Number of trajectories to evaluate. + cfg_eval_infer: Evaluation configuration for inference mode. n_rollout_steps: Number of rollout steps. - out_type: Output type. Either "none", "vtk" or "pkl". - n_extrap_steps: Number of extrapolation steps. seed: Seed. - metrics_stride: Stride for e_kin and sinkhorn. - batch_size: Batch size for inference. Returns: eval_metrics: Metrics per trajectory. """ assert ( - params is not None or load_checkpoint is not None - ), "Either params or a load_checkpoint directory must be provided for inference." + params is not None or load_ckp is not None + ), "Either params or a load_ckp directory must be provided for inference." + + if isinstance(cfg_eval_infer, Dict): + cfg_eval_infer = OmegaConf.create(cfg_eval_infer) + + # if one of the cfg_* arguments has a subset of the default configs, merge them + cfg_eval_infer = OmegaConf.merge(defaults.eval.infer, cfg_eval_infer) + + n_trajs = cfg_eval_infer.n_trajs + if n_trajs == -1: + n_trajs = data_test.num_samples if params is not None: if state is None: state = {} else: - params, state, _, _ = load_haiku(load_checkpoint) + params, state, _, _ = load_haiku(load_ckp) key, seed_worker, generator = set_seed(seed) loader_test = DataLoader( dataset=data_test, - batch_size=batch_size, + batch_size=cfg_eval_infer.batch_size, collate_fn=numpy_collate, worker_init_fn=seed_worker, generator=generator, ) metrics_computer = MetricsComputer( - metrics, + cfg_eval_infer.metrics, dist_fn=case.displacement, metadata=data_test.metadata, input_seq_length=data_test.input_seq_length, - stride=metrics_stride, + stride=cfg_eval_infer.metrics_stride, ) # Precompile model model_apply = jit(model.apply) @@ -390,9 +391,9 @@ def infer( neighbors=neighbors, loader_eval=loader_test, n_rollout_steps=n_rollout_steps, - n_trajs=eval_n_trajs, + n_trajs=n_trajs, rollout_dir=rollout_dir, - out_type=out_type, - n_extrap_steps=n_extrap_steps, + out_type=cfg_eval_infer.out_type, + n_extrap_steps=cfg_eval_infer.n_extrap_steps, ) return eval_metrics diff --git a/lagrangebench/models/egnn.py b/lagrangebench/models/egnn.py index b98ed7f..3af9088 100644 --- a/lagrangebench/models/egnn.py +++ b/lagrangebench/models/egnn.py @@ -300,7 +300,7 @@ def __init__( self._tanh = tanh # integrator - self._dt = dt / num_mp_steps + self._dt = dt / self._num_mp_steps self._displacement_fn = displacement_fn self._shift_fn = shift_fn if normalization_stats is None: diff --git a/lagrangebench/models/painn.py 
b/lagrangebench/models/painn.py index 0447361..de83f98 100644 --- a/lagrangebench/models/painn.py +++ b/lagrangebench/models/painn.py @@ -408,27 +408,27 @@ def __init__( self.radial_basis_fn = radial_basis_fn self.cutoff_fn = cutoff_fn - self.scalar_emb = LinearXav(hidden_size, name="scalar_embedding") + self.scalar_emb = LinearXav(self._hidden_size, name="scalar_embedding") # mix vector channels (only used if vector features are present in input) self.vector_emb = LinearXav( - hidden_size, with_bias=False, name="vector_embedding" + self._hidden_size, with_bias=False, name="vector_embedding" ) if shared_filters: - self.filter_net = LinearXav(3 * hidden_size, name="filter_net") + self.filter_net = LinearXav(3 * self._hidden_size, name="filter_net") else: self.filter_net = LinearXav( - num_mp_steps * 3 * hidden_size, name="filter_net" + self._num_mp_steps * 3 * self._hidden_size, name="filter_net" ) if self._shared_interactions: self.layers = [ - PaiNNLayer(hidden_size, 0, activation, eps=eps) - ] * num_mp_steps + PaiNNLayer(self._hidden_size, 0, activation, eps=eps) + ] * self._num_mp_steps else: self.layers = [ - PaiNNLayer(hidden_size, i, activation, eps=eps) - for i in range(num_mp_steps) + PaiNNLayer(self._hidden_size, i, activation, eps=eps) + for i in range(self._num_mp_steps) ] self._readout = PaiNNReadout(self._hidden_size, out_channels=output_size) diff --git a/lagrangebench/runner.py b/lagrangebench/runner.py new file mode 100644 index 0000000..7eefebd --- /dev/null +++ b/lagrangebench/runner.py @@ -0,0 +1,289 @@ +import os +import os.path as osp +from argparse import Namespace +from datetime import datetime +from typing import Callable, Dict, Optional, Tuple, Type, Union + +import haiku as hk +import jax +import jax.numpy as jnp +import jmp +import numpy as np +from e3nn_jax import Irreps +from jax import config +from jax_md import space +from omegaconf import DictConfig, OmegaConf + +from lagrangebench import Trainer, infer, models +from lagrangebench.case_setup import case_builder +from lagrangebench.data import H5Dataset +from lagrangebench.defaults import check_cfg +from lagrangebench.evaluate import averaged_metrics +from lagrangebench.models.utils import node_irreps +from lagrangebench.utils import NodeType + + +def train_or_infer(cfg: Union[Dict, DictConfig]): + if isinstance(cfg, Dict): + cfg = OmegaConf.create(cfg) + # sanity check on the passed configs + check_cfg(cfg) + + mode = cfg.mode + load_ckp = cfg.load_ckp + is_test = cfg.eval.test + + if cfg.dtype == "float64": + config.update("jax_enable_x64", True) + + data_train, data_valid, data_test = setup_data(cfg) + + metadata = data_train.metadata + # neighbors search + bounds = np.array(metadata["bounds"]) + box = bounds[:, 1] - bounds[:, 0] + + # setup core functions + case = case_builder( + box=box, + metadata=metadata, + input_seq_length=cfg.model.input_seq_length, + cfg_neighbors=cfg.neighbors, + cfg_model=cfg.model, + noise_std=cfg.train.noise_std, + external_force_fn=data_train.external_force_fn, + dtype=cfg.dtype, + ) + + _, particle_type = data_train[0] + + # setup model from configs + model, MODEL = setup_model( + cfg, + metadata=metadata, + homogeneous_particles=particle_type.max() == particle_type.min(), + has_external_force=data_train.external_force_fn is not None, + normalization_stats=case.normalization_stats, + ) + model = hk.without_apply_rng(hk.transform_with_state(model)) + + # mixed precision training based on this reference: + # 
https://github.com/deepmind/dm-haiku/blob/main/examples/imagenet/train.py + policy = jmp.get_policy("params=float32,compute=float32,output=float32") + hk.mixed_precision.set_policy(MODEL, policy) + + if mode == "train" or mode == "all": + print("Start training...") + + if cfg.logging.run_name is None: + run_prefix = f"{cfg.model.name}_{data_train.name}" + data_and_time = datetime.today().strftime("%Y%m%d-%H%M%S") + cfg.logging.run_name = f"{run_prefix}_{data_and_time}" + + store_ckp = os.path.join(cfg.logging.ckp_dir, cfg.logging.run_name) + os.makedirs(store_ckp, exist_ok=True) + os.makedirs(os.path.join(store_ckp, "best"), exist_ok=True) + with open(os.path.join(store_ckp, "config.yaml"), "w") as f: + OmegaConf.save(config=cfg, f=f.name) + with open(os.path.join(store_ckp, "best", "config.yaml"), "w") as f: + OmegaConf.save(config=cfg, f=f.name) + + # dictionary of configs which will be stored on W&B + wandb_config = OmegaConf.to_container(cfg) + + trainer = Trainer( + model, + case, + data_train, + data_valid, + cfg.train, + cfg.eval, + cfg.logging, + input_seq_length=cfg.model.input_seq_length, + seed=cfg.seed, + ) + + _, _, _ = trainer.train( + step_max=cfg.train.step_max, + load_ckp=load_ckp, + store_ckp=store_ckp, + wandb_config=wandb_config, + ) + + if mode == "infer" or mode == "all": + print("Start inference...") + + if mode == "infer": + model_dir = load_ckp + if mode == "all": + model_dir = os.path.join(store_ckp, "best") + assert osp.isfile(os.path.join(model_dir, "params_tree.pkl")) + + cfg.eval.rollout_dir = model_dir.replace("ckp", "rollout") + os.makedirs(cfg.eval.rollout_dir, exist_ok=True) + + if cfg.eval.infer.n_trajs is None: + cfg.eval.infer.n_trajs = cfg.eval.train.n_trajs + + assert model_dir, "model_dir must be specified for inference."
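Both `infer` (whose new signature is shown further above) and the `Trainer` accept a partial config and complete it against the package defaults via `OmegaConf.merge`. A minimal sketch of that merge behaviour, not part of the diff and with arbitrary override values:

```python
# Minimal sketch of the config completion that infer() performs internally.
# The override values below are illustrative, not library defaults.
from omegaconf import OmegaConf

from lagrangebench.defaults import defaults

user_cfg = OmegaConf.create({"n_trajs": 5, "out_type": "vtk"})
cfg_eval_infer = OmegaConf.merge(defaults.eval.infer, user_cfg)

assert cfg_eval_infer.n_trajs == 5  # user override wins
assert cfg_eval_infer.metrics == ["mse", "e_kin", "sinkhorn"]  # default kept
```

Keys that are omitted fall back to `defaults.eval.infer`, so callers only need to spell out what differs from the defaults.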
+ metrics = infer( + model, + case, + data_test if is_test else data_valid, + load_ckp=model_dir, + cfg_eval_infer=cfg.eval.infer, + rollout_dir=cfg.eval.rollout_dir, + n_rollout_steps=cfg.eval.n_rollout_steps, + seed=cfg.seed, + ) + + split = "test" if is_test else "valid" + print(f"Metrics of {model_dir} on {split} split:") + print(averaged_metrics(metrics)) + + return 0 + + +def setup_data(cfg) -> Tuple[H5Dataset, H5Dataset, Namespace]: + dataset_path = cfg.dataset_path + ckp_dir = cfg.logging.ckp_dir + rollout_dir = cfg.eval.rollout_dir + input_seq_length = cfg.model.input_seq_length + n_rollout_steps = cfg.eval.n_rollout_steps + nl_backend = cfg.neighbors.backend + + if not osp.isabs(dataset_path): + dataset_path = osp.join(os.getcwd(), dataset_path) + + if ckp_dir is not None: + os.makedirs(ckp_dir, exist_ok=True) + if rollout_dir is not None: + os.makedirs(rollout_dir, exist_ok=True) + + # dataloader + data_train = H5Dataset( + "train", + dataset_path=dataset_path, + input_seq_length=input_seq_length, + extra_seq_length=cfg.train.pushforward.unrolls[-1], + nl_backend=nl_backend, + ) + data_valid = H5Dataset( + "valid", + dataset_path=dataset_path, + input_seq_length=input_seq_length, + extra_seq_length=n_rollout_steps, + nl_backend=nl_backend, + ) + data_test = H5Dataset( + "test", + dataset_path=dataset_path, + input_seq_length=input_seq_length, + extra_seq_length=n_rollout_steps, + nl_backend=nl_backend, + ) + + return data_train, data_valid, data_test + + +def setup_model( + cfg, + metadata: Dict, + homogeneous_particles: bool = False, + has_external_force: bool = False, + normalization_stats: Optional[Dict] = None, +) -> Tuple[Callable, Type]: + """Setup model based on cfg.""" + model_name = cfg.model.name.lower() + input_seq_length = cfg.model.input_seq_length + magnitude_features = cfg.model.magnitude_features + + if model_name == "gns": + + def model_fn(x): + return models.GNS( + particle_dimension=metadata["dim"], + latent_size=cfg.model.latent_dim, + blocks_per_step=cfg.model.num_mlp_layers, + num_mp_steps=cfg.model.num_mp_steps, + num_particle_types=NodeType.SIZE, + particle_type_embedding_size=16, + )(x) + + MODEL = models.GNS + elif model_name == "segnn": + # Hx1o vel, Hx0e vel, 2x1o boundary, 9x0e type + node_feature_irreps = node_irreps( + metadata, + input_seq_length, + has_external_force, + magnitude_features, + homogeneous_particles, + ) + # 1o displacement, 0e distance + edge_feature_irreps = Irreps("1x1o + 1x0e") + + def model_fn(x): + return models.SEGNN( + node_features_irreps=node_feature_irreps, + edge_features_irreps=edge_feature_irreps, + scalar_units=cfg.model.latent_dim, + lmax_hidden=cfg.model.lmax_hidden, + lmax_attributes=cfg.model.lmax_attributes, + output_irreps=Irreps("1x1o"), + num_mp_steps=cfg.model.num_mp_steps, + n_vels=cfg.model.input_seq_length - 1, + velocity_aggregate=cfg.model.velocity_aggregate, + homogeneous_particles=homogeneous_particles, + blocks_per_step=cfg.model.num_mlp_layers, + norm=cfg.model.segnn_norm, + )(x) + + MODEL = models.SEGNN + elif model_name == "egnn": + box = cfg.box + if jnp.array(metadata["periodic_boundary_conditions"]).any(): + displacement_fn, shift_fn = space.periodic(jnp.array(box)) + else: + displacement_fn, shift_fn = space.free() + + displacement_fn = jax.vmap(displacement_fn, in_axes=(0, 0)) + shift_fn = jax.vmap(shift_fn, in_axes=(0, 0)) + + def model_fn(x): + return models.EGNN( + hidden_size=cfg.model.latent_dim, + output_size=1, + dt=metadata["dt"] * metadata["write_every"], + 
displacement_fn=displacement_fn, + shift_fn=shift_fn, + normalization_stats=normalization_stats, + num_mp_steps=cfg.model.num_mp_steps, + n_vels=input_seq_length - 1, + residual=True, + )(x) + + MODEL = models.EGNN + elif model_name == "painn": + assert magnitude_features, "PaiNN requires magnitudes" + radius = metadata["default_connectivity_radius"] * 1.5 + + def model_fn(x): + return models.PaiNN( + hidden_size=cfg.model.latent_dim, + output_size=1, + n_vels=input_seq_length - 1, + radial_basis_fn=models.painn.gaussian_rbf(20, radius, trainable=True), + cutoff_fn=models.painn.cosine_cutoff(radius), + num_mp_steps=cfg.model.num_mp_steps, + )(x) + + MODEL = models.PaiNN + elif model_name == "linear": + + def model_fn(x): + return models.Linear(dim_out=metadata["dim"])(x) + + MODEL = models.Linear + + return model_fn, MODEL diff --git a/lagrangebench/train/strats.py b/lagrangebench/train/strats.py index da47056..a585983 100644 --- a/lagrangebench/train/strats.py +++ b/lagrangebench/train/strats.py @@ -95,7 +95,7 @@ def push_forward_sample_steps(key, step, pushforward): key, key_unroll = jax.random.split(key, 2) # steps needs to be an ordered list - steps = jnp.array(pushforward["steps"]) + steps = jnp.array(pushforward.steps) assert all(steps[i] <= steps[i + 1] for i in range(len(steps) - 1)) # until which index to sample from @@ -103,8 +103,8 @@ def push_forward_sample_steps(key, step, pushforward): unroll_steps = jax.random.choice( key_unroll, - a=jnp.array(pushforward["unrolls"][:idx]), - p=jnp.array(pushforward["probs"][:idx]), + a=jnp.array(pushforward.unrolls[:idx]), + p=jnp.array(pushforward.probs[:idx]), ) return key, unroll_steps diff --git a/lagrangebench/train/trainer.py b/lagrangebench/train/trainer.py index 97d943b..575af5e 100644 --- a/lagrangebench/train/trainer.py +++ b/lagrangebench/train/trainer.py @@ -1,25 +1,25 @@ """Training utils and functions.""" import os +from collections import namedtuple from functools import partial -from typing import Callable, Dict, List, Optional, Tuple +from typing import Callable, Dict, Optional, Tuple, Union import haiku as hk import jax import jax.numpy as jnp import jraph import optax +import wandb from jax import vmap +from omegaconf import DictConfig, OmegaConf from torch.utils.data import DataLoader -from wandb.wandb_run import Run from lagrangebench.data import H5Dataset from lagrangebench.data.utils import numpy_collate from lagrangebench.defaults import defaults from lagrangebench.evaluate import MetricsComputer, averaged_metrics, eval_rollout from lagrangebench.utils import ( - LossConfig, - PushforwardConfig, broadcast_from_batch, broadcast_to_batch, get_kinematic_mask, @@ -40,19 +40,19 @@ def _mse( particle_type: jnp.ndarray, target: jnp.ndarray, model_fn: Callable, - loss_weight: LossConfig, + loss_weight: Dict[str, float], ): pred, state = model_fn(params, state, (features, particle_type)) # check active (non zero) output shapes - keys = list(set(loss_weight.nonzero) & set(pred.keys())) - assert all(target[k].shape == pred[k].shape for k in keys) + assert all(target[k].shape == pred[k].shape for k in pred) # particle mask non_kinematic_mask = jnp.logical_not(get_kinematic_mask(particle_type)) num_non_kinematic = non_kinematic_mask.sum() # loss components losses = [] - for t in keys: - losses.append((loss_weight[t] * (pred[t] - target[t]) ** 2).sum(axis=-1)) + for t in pred: + w = getattr(loss_weight, t) + losses.append((w * (pred[t] - target[t]) ** 2).sum(axis=-1)) total_loss = jnp.array(losses).sum(0) total_loss = 
jnp.where(non_kinematic_mask, total_loss, 0) total_loss = total_loss.sum() / num_non_kinematic @@ -89,129 +89,132 @@ def _update( return loss, new_params, state, opt_state -def Trainer( - model: hk.TransformedWithState, - case, - data_train: H5Dataset, - data_valid: H5Dataset, - pushforward: Optional[PushforwardConfig] = None, - metrics: List = ["mse"], - seed: int = defaults.seed, - batch_size: int = defaults.batch_size, - input_seq_length: int = defaults.input_seq_length, - noise_std: float = defaults.noise_std, - lr_start: float = defaults.lr_start, - lr_final: float = defaults.lr_final, - lr_decay_steps: int = defaults.lr_decay_steps, - lr_decay_rate: float = defaults.lr_decay_rate, - loss_weight: Optional[LossConfig] = None, - n_rollout_steps: int = defaults.n_rollout_steps, - eval_n_trajs: int = defaults.eval_n_trajs, - rollout_dir: str = defaults.rollout_dir, - out_type: str = defaults.out_type, - log_steps: int = defaults.log_steps, - eval_steps: int = defaults.eval_steps, - metrics_stride: int = defaults.metrics_stride, - num_workers: int = defaults.num_workers, - batch_size_infer: int = defaults.batch_size_infer, -) -> Callable: +class Trainer: """ - Builds a function that automates model training and evaluation. + Trainer class. - Given a model, training and validation datasets and a case this function returns - another function that: + Given a model, case setup, training and validation datasets this class + automates training and evaluation. - 1. Initializes (or resumes from a checkpoint) model, optimizer and loss function. + 1. Initializes (or restarts a checkpoint) model, optimizer and loss function. 2. Trains the model on data_train, using the given pushforward and noise tricks. 3. Evaluates the model on data_valid on the specified metrics. - - Args: - model: (Transformed) Haiku model. - case: Case setup class. - data_train: Training dataset. - data_valid: Validation dataset. - pushforward: Pushforward configuration. None for no pushforward. - metrics: Metrics to evaluate the model on. - seed: Random seed for model init, training tricks and dataloading. - batch_size: Training batch size. - input_seq_length: Input sequence length. Default is 6. - noise_std: Noise standard deviation for the GNS-style noise. - lr_start: Initial learning rate. - lr_final: Final learning rate. - lr_decay_steps: Number of steps to reach the final learning rate. - lr_decay_rate: Learning rate decay rate. - loss_weight: Loss weight object. - n_rollout_steps: Number of autoregressive rollout steps. - eval_n_trajs: Number of trajectories to evaluate. - rollout_dir: Rollout directory. - out_type: Output type. - log_steps: Wandb/screen logging frequency. - eval_steps: Evaluation and checkpointing frequency. - metrics_stride: stride for e_kin and sinkhorn. - num_workers: number of workers for data loading. - batch_size_infer: batch size for validation/testing. - - Returns: - Configured training function. """ - assert isinstance( - model, hk.TransformedWithState - ), "Model must be passed as an Haiku transformed function." 
- - base_key, seed_worker, generator = set_seed(seed) - - # dataloaders - loader_train = DataLoader( - dataset=data_train, - batch_size=batch_size, - shuffle=True, - num_workers=num_workers, - collate_fn=numpy_collate, - drop_last=True, - worker_init_fn=seed_worker, - generator=generator, - ) - loader_valid = DataLoader( - dataset=data_valid, - batch_size=batch_size_infer, - collate_fn=numpy_collate, - worker_init_fn=seed_worker, - generator=generator, - ) - # learning rate decays from lr_start to lr_final over lr_decay_steps exponentially - lr_scheduler = optax.exponential_decay( - init_value=lr_start, - transition_steps=lr_decay_steps, - decay_rate=lr_decay_rate, - end_value=lr_final, - ) - # optimizer - opt_init, opt_update = optax.adamw(learning_rate=lr_scheduler, weight_decay=1e-8) - - # loss config - loss_weight = LossConfig() if loss_weight is None else LossConfig(**loss_weight) - # pushforward config - if pushforward is None: - pushforward = PushforwardConfig() - - # metrics computer config - metrics_computer = MetricsComputer( - metrics, - dist_fn=case.displacement, - metadata=data_train.metadata, - input_seq_length=data_train.input_seq_length, - stride=metrics_stride, - ) + def __init__( + self, + model: hk.TransformedWithState, + case, + data_train: H5Dataset, + data_valid: H5Dataset, + cfg_train: Union[Dict, DictConfig] = defaults.train, + cfg_eval: Union[Dict, DictConfig] = defaults.eval, + cfg_logging: Union[Dict, DictConfig] = defaults.logging, + input_seq_length: int = defaults.model.input_seq_length, + seed: int = defaults.seed, + ): + """Initializes the trainer. + + Args: + model: (Transformed) Haiku model. + case: Case setup class. + data_train: Training dataset. + data_valid: Validation dataset. + cfg_train: Training configuration. + cfg_eval: Evaluation configuration. + cfg_logging: Logging configuration. + input_seq_length: Input sequence length, i.e. number of past positions. + seed: Random seed for model init, training tricks and dataloading. + """ + + if isinstance(cfg_train, Dict): + cfg_train = OmegaConf.create(cfg_train) + if isinstance(cfg_eval, Dict): + cfg_eval = OmegaConf.create(cfg_eval) + if isinstance(cfg_logging, Dict): + cfg_logging = OmegaConf.create(cfg_logging) + + self.model = model + self.case = case + self.input_seq_length = input_seq_length + # if one of the cfg_* arguments has a subset of the default configs, merge them + self.cfg_train = OmegaConf.merge(defaults.train, cfg_train) + self.cfg_eval = OmegaConf.merge(defaults.eval, cfg_eval) + self.cfg_logging = OmegaConf.merge(defaults.logging, cfg_logging) + + assert isinstance( + model, hk.TransformedWithState + ), "Model must be passed as an Haiku transformed function." 
+ + available_rollout_length = data_valid.subseq_length - input_seq_length + assert cfg_eval.n_rollout_steps <= available_rollout_length, ( + "The loss cannot be evaluated on longer than a ground truth trajectory " + f"({cfg_eval.n_rollout_steps} > {available_rollout_length})" + ) + assert cfg_eval.train.n_trajs <= data_valid.num_samples, ( + f"Number of requested validation trajectories exceeds the available ones " + f"({cfg_eval.train.n_trajs} > {data_valid.num_samples})" + ) + + # set the number of validation trajectories during training + if self.cfg_eval.train.n_trajs == -1: + self.cfg_eval.train.n_trajs = data_valid.num_samples + + # make immutable for jitting + loss_weight = self.cfg_train.loss_weight + self.loss_weight = namedtuple("loss_weight", loss_weight)(**loss_weight) + + self.base_key, seed_worker, generator = set_seed(seed) + + # dataloaders + self.loader_train = DataLoader( + dataset=data_train, + batch_size=self.cfg_train.batch_size, + shuffle=True, + num_workers=self.cfg_train.num_workers, + collate_fn=numpy_collate, + drop_last=True, + worker_init_fn=seed_worker, + generator=generator, + ) + self.loader_valid = DataLoader( + dataset=data_valid, + batch_size=self.cfg_eval.train.batch_size, + collate_fn=numpy_collate, + worker_init_fn=seed_worker, + generator=generator, + ) + + # exponential learning rate decays from lr_start to lr_final over lr_decay_steps + lr_scheduler = optax.exponential_decay( + init_value=self.cfg_train.optimizer.lr_start, + transition_steps=self.cfg_train.optimizer.lr_decay_steps, + decay_rate=self.cfg_train.optimizer.lr_decay_rate, + end_value=self.cfg_train.optimizer.lr_final, + ) + # optimizer + self.opt_init, self.opt_update = optax.adamw( + learning_rate=lr_scheduler, weight_decay=1e-8 + ) + + # metrics computer config + self.metrics_computer = MetricsComputer( + self.cfg_eval.train.metrics, + dist_fn=self.case.displacement, + metadata=data_train.metadata, + input_seq_length=self.input_seq_length, + stride=self.cfg_eval.train.metrics_stride, + ) - def _train( - step_max: int = defaults.step_max, + def train( + self, + step_max: int = defaults.train.step_max, params: Optional[hk.Params] = None, state: Optional[hk.State] = None, opt_state: Optional[optax.OptState] = None, - store_checkpoint: Optional[str] = None, - load_checkpoint: Optional[str] = None, - wandb_run: Optional[Run] = None, + store_ckp: Optional[str] = None, + load_ckp: Optional[str] = None, + wandb_config: Optional[Dict] = None, ) -> Tuple[hk.Params, hk.State, optax.OptState]: """ Training loop. @@ -224,59 +227,87 @@ params: Optional model parameters. If provided, training continues from it. state: Optional model state. opt_state: Optional optimizer state. - store_checkpoint: Checkpoints destination. Without it params aren't saved. - load_checkpoint: Initial checkpoint directory. If provided resumes training. - wandb_run: Wandb run. + store_ckp: Checkpoints destination. Without it params aren't saved. + load_ckp: Initial checkpoint directory. If provided resumes training. + wandb_config: Optional configuration to be logged on wandb. Returns: Tuple containing the final model parameters, state and optimizer state.
""" - assert n_rollout_steps <= data_valid.subseq_length - input_seq_length, ( - "You cannot evaluate the loss on longer than a ground truth trajectory " - f"({n_rollout_steps}, {data_valid.subseq_length}, {input_seq_length})" - ) - assert eval_n_trajs <= loader_valid.dataset.num_samples, ( - f"eval_n_trajs must be <= loader_valid.dataset.num_samples, but it is " - f"{eval_n_trajs} > {loader_valid.dataset.num_samples}" - ) + + model = self.model + case = self.case + cfg_train = self.cfg_train + cfg_eval = self.cfg_eval + cfg_logging = self.cfg_logging + loader_train = self.loader_train + loader_valid = self.loader_valid + noise_std = cfg_train.noise_std + pushforward = cfg_train.pushforward # Precompile model for evaluation model_apply = jax.jit(model.apply) # loss and update functions - loss_fn = partial(_mse, model_fn=model_apply, loss_weight=loss_weight) - update_fn = partial(_update, loss_fn=loss_fn, opt_update=opt_update) + loss_fn = partial(_mse, model_fn=model_apply, loss_weight=self.loss_weight) + update_fn = partial(_update, loss_fn=loss_fn, opt_update=self.opt_update) # init values pos_input_and_target, particle_type = next(iter(loader_train)) raw_sample = (pos_input_and_target[0], particle_type[0]) - key, features, _, neighbors = case.allocate(base_key, raw_sample) + key, features, _, neighbors = case.allocate(self.base_key, raw_sample) step = 0 if params is not None: # continue training from params if state is None: state = {} - elif load_checkpoint: + elif load_ckp: # continue training from checkpoint - params, state, opt_state, step = load_haiku(load_checkpoint) + params, state, opt_state, step = load_haiku(load_ckp) else: # initialize new model key, subkey = jax.random.split(key, 2) params, state = model.init(subkey, (features, particle_type[0])) - if wandb_run is not None: - wandb_run.log({"info/num_params": get_num_params(params)}, 0) - wandb_run.log({"info/step_start": step}, 0) + # start logging + if cfg_logging.wandb: + if wandb_config is None: + # minimal config reconstruction without model details + wandb_config = { + "train": OmegaConf.to_container(cfg_train), + "eval": OmegaConf.to_container(cfg_eval), + "logging": OmegaConf.to_container(cfg_logging), + "dataset_path": loader_train.dataset.dataset_path, + } + + else: + wandb_config["eval"]["train"]["n_trajs"] = cfg_eval.train.n_trajs + + wandb_config["info"] = { + "dataset_name": loader_train.dataset.name, + "len_train": len(loader_train.dataset), + "len_eval": len(loader_valid.dataset), + "num_params": get_num_params(params).item(), + "step_start": step, + } + + wandb_run = wandb.init( + project=cfg_logging.wandb_project, + entity=cfg_logging.wandb_entity, + name=cfg_logging.run_name, + config=wandb_config, + save_code=True, + ) # initialize optimizer state if opt_state is None: - opt_state = opt_init(params) + opt_state = self.opt_init(params) # create new checkpoint directory - if store_checkpoint is not None: - os.makedirs(store_checkpoint, exist_ok=True) - os.makedirs(os.path.join(store_checkpoint, "best"), exist_ok=True) + if store_ckp is not None: + os.makedirs(store_ckp, exist_ok=True) + os.makedirs(os.path.join(store_ckp, "best"), exist_ok=True) preprocess_vmap = jax.vmap(case.preprocess, in_axes=(0, 0, None, 0, None)) push_forward = push_forward_build(model_apply, case) @@ -302,7 +333,7 @@ def _train( unroll_steps, ) # unroll for push-forward steps - _current_pos = raw_batch[0][:, :, :input_seq_length] + _current_pos = raw_batch[0][:, :, : self.input_seq_length] for _ in range(unroll_steps): if 
neighbors_batch.did_buffer_overflow.sum() > 0: break @@ -341,28 +372,28 @@ def _train( opt_state=opt_state, ) - if step % log_steps == 0: + if step % cfg_logging.log_steps == 0: loss.block_until_ready() - if wandb_run: + if cfg_logging.wandb: wandb_run.log({"train/loss": loss.item()}, step) else: step_str = str(step).zfill(len(str(int(step_max)))) print(f"{step_str}, train/loss: {loss.item():.5f}.") - if step % eval_steps == 0 and step > 0: + if step % cfg_logging.eval_steps == 0 and step > 0: nbrs = broadcast_from_batch(neighbors_batch, index=0) eval_metrics = eval_rollout( case=case, - metrics_computer=metrics_computer, + metrics_computer=self.metrics_computer, model_apply=model_apply, params=params, state=state, neighbors=nbrs, loader_eval=loader_valid, - n_rollout_steps=n_rollout_steps, - n_trajs=eval_n_trajs, - rollout_dir=rollout_dir, - out_type=out_type, + n_rollout_steps=cfg_eval.n_rollout_steps, + n_trajs=cfg_eval.train.n_trajs, + rollout_dir=cfg_eval.rollout_dir, + out_type=cfg_eval.train.out_type, ) metrics = averaged_metrics(eval_metrics) @@ -370,12 +401,10 @@ def _train( "step": step, "loss": metrics.get("val/loss", None), } - if store_checkpoint is not None: - save_haiku( - store_checkpoint, params, state, opt_state, metadata_ckp - ) + if store_ckp is not None: + save_haiku(store_ckp, params, state, opt_state, metadata_ckp) - if wandb_run: + if cfg_logging.wandb: wandb_run.log(metrics, step) else: print(metrics) @@ -384,6 +413,7 @@ def _train( if step == step_max + 1: break - return params, state, opt_state + if cfg_logging.wandb: + wandb_run.finish() - return _train + return params, state, opt_state diff --git a/lagrangebench/utils.py b/lagrangebench/utils.py index 9589e39..9255fd8 100644 --- a/lagrangebench/utils.py +++ b/lagrangebench/utils.py @@ -5,8 +5,7 @@ import os import pickle import random -from dataclasses import dataclass, field -from typing import Callable, List, Tuple +from typing import Callable, Tuple import cloudpickle import jax @@ -15,7 +14,6 @@ import torch -# TODO look for a better place to put this and get_kinematic_mask class NodeType(enum.IntEnum): """Particle types.""" @@ -161,37 +159,3 @@ def seed_worker(_): generator.manual_seed(seed) return key, seed_worker, generator - - -@dataclass(frozen=True) -class LossConfig: - """Weights for the different targets in the loss function.""" - - pos: float = 0.0 - vel: float = 0.0 - acc: float = 1.0 - - def __getitem__(self, item): - return getattr(self, item) - - @property - def nonzero(self): - return [field for field in self.__annotations__ if self[field] != 0] - - -@dataclass(frozen=False) -class PushforwardConfig: - """Pushforward trick configuration. - - Attributes: - steps: When to introduce each unroll stage, e.g. [-1, 20000, 50000] - unrolls: For how many timesteps to unroll, e.g. [0, 1, 20] - probs: Probability (ratio) between the relative unrolls, e.g. 
[5, 4, 1] - """ - - steps: List[int] = field(default_factory=lambda: [-1]) - unrolls: List[int] = field(default_factory=lambda: [0]) - probs: List[float] = field(default_factory=lambda: [1.0]) - - def __getitem__(self, item): - return getattr(self, item) diff --git a/main.py b/main.py index bc25a09..d0ea576 100644 --- a/main.py +++ b/main.py @@ -1,38 +1,58 @@ import os -import pprint -from argparse import Namespace -import yaml +from omegaconf import DictConfig, OmegaConf + + +def load_embedded_configs(config_path: str, cli_args: DictConfig) -> DictConfig: + """Loads all 'extends' embedded configs and merge them with the cli overwrites.""" + + cfgs = [OmegaConf.load(config_path)] + while "extends" in cfgs[0]: + extends_path = cfgs[0]["extends"] + del cfgs[0]["extends"] + + # go to parents configs until the defaults are reached + if extends_path != "LAGRANGEBENCH_DEFAULTS": + cfgs = [OmegaConf.load(extends_path)] + cfgs + else: + from lagrangebench.defaults import defaults + + cfgs = [defaults] + cfgs + break + + # merge all embedded configs and give highest priority to cli_args + cfg = OmegaConf.merge(*cfgs, cli_args) + return cfg -from experiments.config import NestedLoader, cli_arguments if __name__ == "__main__": - cli_args = cli_arguments() - if "config" in cli_args: # to (re)start training - config_path = cli_args["config"] - elif "model_dir" in cli_args: # to run inference - config_path = os.path.join(cli_args["model_dir"], "config.yaml") - - with open(config_path, "r") as f: - args = yaml.load(f, NestedLoader) - - # priority to command line arguments - args.update(cli_args) - args = Namespace(config=Namespace(**args), info=Namespace()) - print("#" * 79, "\nStarting a LagrangeBench run with the following configs:") - pprint.pprint(vars(args.config)) - print("#" * 79) + # TODO: add optional wandb.sweeps + + cli_args = OmegaConf.from_cli() + assert ("config" in cli_args) != ( + "load_ckp" in cli_args + ), "You must specify one of 'config' or 'load_ckp'." 
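The `extends` chain that `load_embedded_configs` resolves above is equivalent to a series of `OmegaConf.merge` calls with the CLI overrides applied last. A minimal sketch, not part of the diff and with hypothetical file names and values:

```python
# Hypothetical config chain (file names and values are illustrative only):
#   configs/example/base.yaml: extends: LAGRANGEBENCH_DEFAULTS
#                              dataset_path: datasets/my_dataset
#   configs/example/gns.yaml:  extends: configs/example/base.yaml
#                              model: {name: gns}
from omegaconf import OmegaConf

from lagrangebench.defaults import defaults

base = OmegaConf.create({"dataset_path": "datasets/my_dataset"})
gns = OmegaConf.create({"model": {"name": "gns"}})
cli = OmegaConf.from_cli()  # e.g. `mode=infer` passed on the command line

# lowest priority first: package defaults < base.yaml < gns.yaml < CLI
cfg = OmegaConf.merge(defaults, base, gns, cli)
```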
+ + if "config" in cli_args: # start from config.yaml + config_path = cli_args.config + elif "load_ckp" in cli_args: # start from a checkpoint + config_path = os.path.join(cli_args.load_ckp, "config.yaml") + + # values that need to be specified before importing jax + cli_args.gpu = cli_args.get("gpu", -1) + cli_args.xla_mem_fraction = cli_args.get("xla_mem_fraction", 0.75) # specify cuda device os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152 from TensorFlow - os.environ["CUDA_VISIBLE_DEVICES"] = str(args.config.gpu) - os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = str(args.config.xla_mem_fraction) + os.environ["CUDA_VISIBLE_DEVICES"] = str(cli_args.gpu) + os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = str(cli_args.xla_mem_fraction) - if args.config.f64: - from jax import config + cfg = load_embedded_configs(config_path, cli_args) - config.update("jax_enable_x64", True) + print("#" * 79, "\nStarting a LagrangeBench run with the following configs:") + print(OmegaConf.to_yaml(cfg)) + print("#" * 79) - from experiments.run import train_or_infer + from lagrangebench.runner import train_or_infer - train_or_infer(args) + train_or_infer(cfg) diff --git a/notebooks/tutorial.ipynb b/notebooks/tutorial.ipynb index f2e5c36..7cf59bb 100644 --- a/notebooks/tutorial.ipynb +++ b/notebooks/tutorial.ipynb @@ -24,7 +24,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 8, "metadata": {}, "outputs": [], "source": [ @@ -46,12 +46,12 @@ "metadata": {}, "source": [ "## Datasets\n", - "First thing to do is to load the dataset. The simplest way to do this is by using e.g. the `lagrangebench.data.TGV2D` class for the 2-dimensional Taylor-Green vortex problem. It will automatically download the HDF5 files if they are not found in the respective folder, and it will take care of setting up the dataset. Note that for the validation/test set you need to specify a positive number of rollout steps, e.g. `extra_seq_length=20`. This means that the dataset will not split the trajectory into subsequences and keep whole rollouts for evaluation." + "First thing to do is to load the dataset. The simplest way to do this is by using e.g. the `lagrangebench.TGV2D` class for the 2-dimensional Taylor-Green vortex problem. It will automatically download the HDF5 files if they are not found in the respective folder, and it will take care of setting up the dataset. Note that for the validation/test set you need to specify a positive number of rollout steps, e.g. `extra_seq_length=20`. This means that the dataset will not split the trajectory into subsequences and keep whole rollouts for evaluation." 
] }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 9, "metadata": {}, "outputs": [ { @@ -66,8 +66,8 @@ } ], "source": [ - "tgv2d_train = lagrangebench.data.TGV2D(\"train\", extra_seq_length=5) # extra_seq_length=5 will be clear later\n", - "tgv2d_valid = lagrangebench.data.TGV2D(\"valid\", extra_seq_length=20)\n", + "tgv2d_train = lagrangebench.TGV2D(\"train\", extra_seq_length=5) # extra_seq_length=5 will be clear later\n", + "tgv2d_valid = lagrangebench.TGV2D(\"valid\", extra_seq_length=20)\n", "\n", "print(\n", " f\"This is a {tgv2d_train.metadata['dim']}D dataset \"\n", @@ -84,11 +84,11 @@ "source": [ "Similarly, for other datasets one can use the respective subclass, for example\n", "```python\n", - "rpf_3d_data = lagrangebench.data.RPF3D(\"train\") # 3D Reverse Poiseuille flow\n", - "dam_2d_data = lagrangebench.data.DAM2D(\"train\") # 2D Dam break\n", + "rpf_3d_data = lagrangebench.RPF3D(\"train\") # 3D Reverse Poiseuille flow\n", + "dam_2d_data = lagrangebench.DAM2D(\"train\") # 2D Dam break\n", "# etc.\n", "# and in general: \n", - "lagrangebench.data.H5Dataset(\"train\", dataset_path=\"path/to/dataset\")\n", + "lagrangebench.H5Dataset(\"train\", dataset_path=\"path/to/dataset\")\n", "```" ] }, @@ -98,14 +98,14 @@ "metadata": {}, "source": [ "## Models\n", - "All models should inherit from [`models.BaseModel`](/lagrangebench/models/base.py), and generally include a `_transform` function for feature engineering and graph building. \n", + "All models should inherit from [`models.BaseModel`](../lagrangebench/models/base.py), and generally also include a `_transform` function for feature engineering and graph building. \n", "\n", "Here we use a small GNS model, with latent dimension of 16 and 4 message passing layers and predicting 2D accelerations. Note that we use a function wrapper beause `haiku.Modules` must be initialized inside `haiku.transform`.\n" ] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 10, "metadata": {}, "outputs": [], "source": [ @@ -129,7 +129,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 11, "metadata": {}, "outputs": [], "source": [ @@ -153,17 +153,17 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "noise_std = 3e-4\n", "\n", - "pf_config = lagrangebench.PushforwardConfig(\n", - " steps=[-1, 500, 700], # training steps to unlock the relative stage\n", - " unrolls=[0, 2, 5], # number of unroll steps per stage\n", - " probs=[7, 2, 1], # relative probabilities to unroll to the relative stage\n", - ")" + "pf_config = {\n", + " \"steps\": [-1, 500, 700], # training steps to unlock the relative stage\n", + " \"unrolls\": [0, 2, 5], # number of unroll steps per stage\n", + " \"probs\": [7, 2, 1], # relative probabilities to unroll to the relative stage\n", + "}" ] }, { @@ -173,7 +173,7 @@ "source": [ "For example, this configuration would apply noise with `std=3e-4` and pushforward with three unroll stages (0, 2 and 5), \"unlocking\" the second stage after 500 training steps and the third stage after 700 training steps. After 700 steps, 0-step unroll (normal, 1-step training) will happen with a probability of 70%, 2-step unroll with a probability of 20% and finally 5-step unroll with a probability of 10%.\n", "\n", - "Pushforward up to 5 steps is the reason why we created the training dataset as `lagrangebench.data.TGV2D(\"train\", extra_seq_length=5)`, as or every sample from the dataset we need up to 5 steps of unroll." 
+ "Pushforward up to 5 steps is the reason why we created the training dataset as `lagrangebench.TGV2D(\"train\", extra_seq_length=5)`, as or every sample from the dataset we need up to 5 steps of unroll." ] }, { @@ -187,7 +187,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 21, "metadata": {}, "outputs": [], "source": [ @@ -197,8 +197,8 @@ "tgv2d_case = lagrangebench.case_builder(\n", " box=box, # (x,y) array with the world size along each axis. (1.0, 1.0) for 2D TGV\n", " metadata=tgv2d_train.metadata, # metadata dictionary\n", - " input_seq_length=6, # number of consecutive time steps fed to the model\n", - " isotropic_norm=False, # whether to normalize each dimension independently\n", + " input_seq_length = 6, # number of consecutive time steps fed to the model\n", + " cfg_model={\"isotropic_norm\": False}, # normalize each dimension independently\n", " noise_std=noise_std, # noise standard deviation used by the random-walk noise\n", ")" ] @@ -209,72 +209,86 @@ "metadata": {}, "source": [ "## Training and inference\n", - "Finally, to train GNS on Taylor Green (with noise and pushforward) the `lagrangebench.Trainer` methods comes to hand" + "Finally, to train GNS on Taylor Green (with noise and pushforward) the `lagrangebench.Trainer` class comes to hand.\n", + "\n", + "It is worth noting that the `Trainer` class (also `infer` and `case_builder`) expect a nested dictionary structure for the configuration. More details about the expected attributes and shape can be found in [`defaults.py`](../lagrangebench/defaults.py). The missin arguments are automatically filled with the default values." + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "metadata": {}, + "outputs": [], + "source": [ + "# nested training configuration\n", + "cfg_train = {\n", + " \"noise_std\": noise_std, # noise standard deviation\n", + " \"pushforward\": pf_config, # pushforward configuration\n", + " \"optimizer\": {\n", + " \"lr_start\": 5e-4, # initial learning rate\n", + " \"lr_decay_steps\": 1000, # exponentially decay the learning rate for 1000 steps\n", + " }\n", + "}\n", + "\n", + "# nested evaluation configuration\n", + "cfg_eval = {\n", + " \"n_rollout_steps\": 20, # number of steps to rollout the model in evaluation\n", + " \"train\": {\n", + " \"metrics\": [\"mse\"], # list of metrics to compute during evaluation\n", + " \"n_trajs\": 1, # number of trajectories to evaluate\n", + " \"batch_size\": 1, # batch size for parallel evaluation\n", + " }\n", + "}\n", + "\n", + "cfg_logging = {\n", + " \"log_steps\": 100, # log training loss every 100 steps\n", + " \"eval_steps\": 500, # evaluate the model every 500 steps\n", + "}" ] }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 23, "metadata": {}, "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "/home/atoshev/code/lagrangebench/.venv/lib/python3.10/site-packages/jax/_src/ops/scatter.py:94: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=int64 to dtype=int32 with jax_numpy_dtype_promotion='standard'. 
In future JAX releases this will result in an error.\n", - " warnings.warn(\n" - ] - }, { "name": "stdout", "output_type": "stream", "text": [ "0000, train/loss: 2.17292.\n", - "0100, train/loss: 0.18065.\n", - "0200, train/loss: 0.19340.\n", - "0300, train/loss: 0.20835.\n", - "0400, train/loss: 0.14294.\n", - "0500, train/loss: 0.11689.\n", - "(eval) Reallocate neighbors list at step 3\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "/home/atoshev/code/lagrangebench/.venv/lib/python3.10/site-packages/jax/_src/ops/scatter.py:94: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=int64 to dtype=int32 with jax_numpy_dtype_promotion='standard'. In future JAX releases this will result in an error.\n", - " warnings.warn(\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "(eval) From (2, 21057) to (2, 21200)\n", - "(eval) Reallocate neighbors list at step 4\n", - "(eval) From (2, 21200) to (2, 21835)\n", + "0100, train/loss: 0.18017.\n", + "0200, train/loss: 0.19309.\n", + "0300, train/loss: 0.21081.\n", + "0400, train/loss: 0.14229.\n", + "0500, train/loss: 0.13048.\n", + "(eval) Reallocate neighbors list at step 3\n", + "(eval) From (2, 21057) to (2, 21340)\n", + "(eval) Reallocate neighbors list at step 5\n", + "(eval) From (2, 21340) to (2, 24547)\n", + "(eval) Reallocate neighbors list at step 6\n", + "(eval) From (2, 24547) to (2, 29340)\n", "(eval) Reallocate neighbors list at step 7\n", - "(eval) From (2, 21835) to (2, 30975)\n", - "(eval) Reallocate neighbors list at step 8\n", - "(eval) From (2, 30975) to (2, 35677)\n", - "{'val/loss': 0.0032759700912061017, 'val/mse1': 1.752762669147577e-06, 'val/mse10': 0.0004931334458300185, 'val/mse5': 6.879239107686073e-05, 'val/stdloss': 0.00293470282787705, 'val/stdmse1': 1.673463006869998e-06, 'val/stdmse10': 0.0004534740995101451, 'val/stdmse5': 6.43755024564491e-05}\n", - "0600, train/loss: 0.02715.\n", - "0700, train/loss: 1.58997.\n", - "0800, train/loss: 1.85135.\n", - "Reallocate neighbors list at step 805\n", - "From (2, 21057) to (2, 20792)\n", - "0900, train/loss: 0.01133.\n", - "1000, train/loss: 0.01651.\n", + "(eval) From (2, 29340) to (2, 36260)\n", + "{'val/loss': 0.009176546643137027, 'val/mse1': 4.201952603693741e-06, 'val/mse10': 0.0013514320301201014, 'val/mse5': 0.0001816913672696961, 'val/stdloss': 0.0, 'val/stdmse1': 0.0, 'val/stdmse10': 0.0, 'val/stdmse5': 0.0}\n", + "0600, train/loss: 0.01343.\n", + "0700, train/loss: 1.96427.\n", + "Reallocate neighbors list at step 772\n", + "From (2, 21057) to (2, 20557)\n", + "0800, train/loss: 0.13076.\n", + "Reallocate neighbors list at step 804\n", + "From (2, 20557) to (2, 20742)\n", + "0900, train/loss: 0.02982.\n", + "1000, train/loss: 0.19349.\n", "(eval) Reallocate neighbors list at step 3\n", - "(eval) From (2, 20792) to (2, 21027)\n", - "(eval) Reallocate neighbors list at step 6\n" + "(eval) From (2, 20742) to (2, 21182)\n", + "(eval) Reallocate neighbors list at step 5\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ - "/home/atoshev/code/lagrangebench/.venv/lib/python3.10/site-packages/jax/_src/ops/scatter.py:94: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=int64 to dtype=int32 with jax_numpy_dtype_promotion='standard'. 
In future JAX releases this will result in an error.\n", + "/home/ggalletti/git/lagrangebench/.venv/lib/python3.10/site-packages/jax/_src/ops/scatter.py:94: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=int64 to dtype=int32 with jax_numpy_dtype_promotion='standard'. In future JAX releases this will result in an error.\n", " warnings.warn(\n" ] }, @@ -282,12 +296,12 @@ "name": "stdout", "output_type": "stream", "text": [ - "(eval) From (2, 21027) to (2, 23572)\n", + "(eval) From (2, 21182) to (2, 23255)\n", + "(eval) Reallocate neighbors list at step 7\n", + "(eval) From (2, 23255) to (2, 29695)\n", "(eval) Reallocate neighbors list at step 8\n", - "(eval) From (2, 23572) to (2, 27870)\n", - "(eval) Reallocate neighbors list at step 19\n", - "(eval) From (2, 27870) to (2, 31962)\n", - "{'val/loss': 0.00248120749930739, 'val/mse1': 1.393298525555248e-06, 'val/mse10': 0.0003490763834267208, 'val/mse5': 4.809697254341651e-05, 'val/stdloss': 0.002061295717414723, 'val/stdmse1': 1.3039043218413363e-06, 'val/stdmse10': 0.00029981220563334287, 'val/stdmse5': 4.274236635219637e-05}\n" + "(eval) From (2, 29695) to (2, 34125)\n", + "{'val/loss': 0.005788618287444577, 'val/mse1': 3.338676915610406e-06, 'val/mse10': 0.0008740812951579521, 'val/mse5': 0.00012519267697604273, 'val/stdloss': 0.0, 'val/stdmse1': 0.0, 'val/stdmse10': 0.0, 'val/stdmse5': 0.0}\n" ] } ], @@ -297,18 +311,13 @@ " case=tgv2d_case,\n", " data_train=tgv2d_train,\n", " data_valid=tgv2d_valid,\n", - " pushforward=pf_config,\n", - " noise_std=noise_std,\n", - " metrics=[\"mse\"],\n", - " n_rollout_steps=20,\n", - " eval_n_trajs=1,\n", - " lr_start=5e-4,\n", - " log_steps=100,\n", - " eval_steps=500,\n", - " batch_size_infer=1,\n", + " cfg_train=cfg_train,\n", + " cfg_eval=cfg_eval,\n", + " cfg_logging=cfg_logging,\n", + " input_seq_length=6, # number of consecutive time steps fed to the model\n", ")\n", "\n", - "params, state, _ = trainer(step_max=1000)" + "params, state, _ = trainer.train(step_max=1000)" ] }, { @@ -321,7 +330,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 24, "metadata": {}, "outputs": [], "source": [ @@ -338,21 +347,36 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 33, + "metadata": {}, + "outputs": [], + "source": [ + "# nested evaluation configuration\n", + "cfg_eval_infer = {\n", + " \"metrics\": [\"mse\", \"sinkhorn\"], # list of metrics to compute during evaluation\n", + " \"n_trajs\": 1, # number of trajectories to evaluate\n", + " \"batch_size\": 1, # batch size for parallel evaluation\n", + " \"out_type\": \"pkl\", # rollout trajectory output type: pkl or vtk\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "(eval) Reallocate neighbors list at step 5\n" + "(eval) Reallocate neighbors list at step 3\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ - "/home/atoshev/code/lagrangebench/.venv/lib/python3.10/site-packages/jax/_src/ops/scatter.py:94: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=int64 to dtype=int32 with jax_numpy_dtype_promotion='standard'. 
In future JAX releases this will result in an error.\n", + "/home/ggalletti/git/lagrangebench/.venv/lib/python3.10/site-packages/jax/_src/ops/scatter.py:94: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=int64 to dtype=int32 with jax_numpy_dtype_promotion='standard'. In future JAX releases this will result in an error.\n", " warnings.warn(\n" ] }, @@ -360,39 +384,36 @@ "name": "stdout", "output_type": "stream", "text": [ - "(eval) From (2, 20597) to (2, 22350)\n", + "(eval) From (2, 20597) to (2, 21145)\n", "(eval) Reallocate neighbors list at step 6\n", - "(eval) From (2, 22350) to (2, 23725)\n", - "(eval) Reallocate neighbors list at step 8\n", - "(eval) From (2, 23725) to (2, 28452)\n" + "(eval) From (2, 21145) to (2, 25922)\n", + "(eval) Reallocate neighbors list at step 7\n", + "(eval) From (2, 25922) to (2, 30015)\n" ] } ], "source": [ "metrics = lagrangebench.infer(\n", " gns,\n", - " tgv2d_case,\n", - " tgv2d_test,\n", - " params,\n", - " state,\n", - " metrics=[\"mse\", \"sinkhorn\"],\n", - " eval_n_trajs=1,\n", - " n_rollout_steps=20,\n", - " rollout_dir=\"rollouts/\",\n", - " out_type=\"pkl\",\n", - " batch_size=1,\n", + " case=tgv2d_case,\n", + " data_test=tgv2d_test,\n", + " params=params,\n", + " state=state,\n", + " cfg_eval_infer=cfg_eval_infer,\n", + " n_rollout_steps=20, # number of steps to rollout the model in evaluation\n", + " rollout_dir=\"rollouts/\", # directory to save rollouts\n", ")[\"rollout_0\"]\n", "rollout = pickle.load(open(\"rollouts/rollout_0.pkl\", \"rb\"))" ] }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 35, "metadata": {}, "outputs": [ { "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAAA+cAAAF2CAYAAAAMW+lzAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8g+/7EAAAACXBIWXMAAA9hAAAPYQGoP6dpAACV4klEQVR4nOzdeVxVdf7H8dcFZFPBnU0UUnIXEBQhHbUoKiqZzNAWzSyrKdPQTE2xdSjLMpcZss1myjTTrNQo03YZTAUVd3PBDQQRUJDt3vP7w1+3uSOamHpZ3s/H4zyI7/18z/ncU/G9n3vO+X5NhmEYiIiIiIiIiIjdONg7AREREREREZH6TsW5iIiIiIiIiJ2pOBcRERERERGxMxXnIiIiIiIiInam4lxERERERETEzlSci4iIiIiIiNiZinMRERERERERO1NxLiIiIiIiImJnKs5FRERERERE7EzFuYiIiIhILXLfffcREBBw0X0bNWr0h3EBAQHccsstF3UMEbk4Ks5FRERERERE7MzJ3gmIiIiIiMiFe+utt7BYLPZOQ0QuMRXnIiIiIiK1SIMGDeydwiVRWlqKs7MzDg66mVcEdFu7SL31zDPPYDKZ2LVrF/fccw+enp60bNmSqVOnYhgGBw8eZODAgXh4eODt7c2MGTNs+s+ePZsuXbrg7u5O06ZNCQ8PZ8GCBTYxhw8f5v7778fLywsXFxe6dOnCu+++eyXfpoiISK1z8uRJxo4dS0BAAC4uLrRq1Yrrr7+ejRs3Amc/c75//35MJhOvvvoq8+bNo127dri4uNCzZ09++eWXPzxeRkYGLVu2pH///pw6dcrmtZ9++olevXrh6urKVVddxb/+9a+z+u/du5fBgwfTrFkz3N3d6d27NytWrLCJ+e677zCZTCxcuJApU6bg5+eHu7s7RUVF1ufgDx8+TFxcHI0aNaJly5aMHz8es9l8EWdQpHbSlXORei4+Pp5OnTrx0ksvsWLFCl544QWaNWvGm2++ybXXXsvLL7/Mhx9+yPjx4+nZsyd/+ctfeOutt3j88ce54447GDNmDKWlpWzevJm0tDTuuusuAHJycujduzcmk4nHHnuMli1b8uWXXzJy5EiKiooYO3asfd+4iIhIDfXwww/zySef8Nhjj9G5c2eOHz/OTz/9xPbt2+nRo8c5+y1YsICTJ0/y0EMPYTKZmD59Orfffjt79+4959X2X375hZiYGMLDw/nss89wc3OzvrZnzx7uuOMORo4cyfDhw3n33Xe57777CAsLo0uXLsCZ8T4qKoqSkhIef/xxmjdvzvvvv89tt93GJ598wl//+leb4z3//PM4Ozszfvx4ysrKcHZ2BsBsNhMTE0NERASvvvoq33zzDTNmzKBdu3Y88sgjf/aUitQOhojUS9OmTTMAY9SoUda2yspKo3Xr1obJZDJeeukla/uJEycMNzc3Y/jw4YZhGMbAgQONLl26nHf/I0eONHx8fIy8vDyb9iFDhhienp5GSUnJpXszIiIidYinp6fx6KOPnvP14cOHG23btrX+vm/fPgMwmjdvbuTn51vbP/vsMwMwvvjiC5u+DRs2NAzDMH766SfDw8PDiI2NNUpLS22O0bZtWwMwfvjhB2vbsWPHDBcXF2PcuHHWtrFjxxqA8eOPP1rbTp48aQQGBhoBAQGG2Ww2DMMwvv32WwMwrrrqqrM+AwwfPtwAjOeee86mPTQ01AgLCzvneRCpa3Rbu0g998ADD1j/2dHRkfDwcAzDYOTIkdb2Jk2a0KFDB/bu3Wv9/dChQ+e8Vc4wDJYsWcKtt96KYRjk5eVZt5iYGAoLC6235omIiIitJk2akJaWxpEjR6rVLz4+n
+      "image/png": "<base64 PNG data of the regenerated rollout plot omitted>",
tHDlRQMtGdXjomtZmpyMicknsO5bHZxv+6Osepb7uNZnDH/3fG9d1sfd/l5pBxbuIiFRZa/cc46N1+wF46bbOuNZyNDkjEZFLY9Yfre5XXdGIYL96ZqcjJvNyd2HGwC44WOCTxAMsWb/f7JTkMlDxLiIiVVJ+kZUnl24GYGBYc8ICGpickYjIpZF27BSfJp1pdVdfdymh/u81j4p3ERGpkmZ9v4s9R/NoXNeFsTe2NTsdEZFL5kyr+5VXNKJr8/pmpyOVyF/7v2v+9+pPxbuIiFQ5OzJO8OYPuwF49tYOeNauZXJGIiKXxv6sU3yadACAUdep1V1K+2v/912HTzJhmfq/V2cq3kVEpEqx2gye+HQTxTaD3u29uaFjE7NTEhG5ZP77wy6KbQa9Ar0IaaFWdznbX/u/f5qk/u/VmYp3ERGpUt6PTyV5fzZ1XZx49taOZqcjInLJ7M86xZL1anWXv6f+7zXDRRXvs2bNwt/fH1dXV8LDw1m3bt1545csWULbtm1xdXWlU6dOrFy5stT2pUuX0rt3bxo2bIjFYiE5OfmsY1x99dVYLJZSywMPPFAqJi0tjT59+uDm5kbjxo157LHHKC5Wvw8Rkepif9Yppn69A4DHb2yLj6eryRmJiFw6//1hN8U2g56tvejmr0E55fzU/736K3fxvnjxYmJjY5k0aRJJSUkEBQURHR3N4cOHy4xfs2YNAwcOZOjQoWzYsIGYmBhiYmLYsmWLPSYvL4+ePXvy8ssvn/fc9913H4cOHbIvU6ZMsW+zWq306dOHwsJC1qxZw4IFC5g/fz4TJ04s7yWKiEglZLMZjFmykbxCK2H+DbgrrLnZKYmIXDIHs0/zSWLJ688aYV4uxJn+794e6v9eXZW7eH/llVe47777GDJkCO3bt2f27Nm4ubkxd+7cMuNff/11brjhBh577DHatWvHc889R9euXZk5c6Y95p577mHixIlERUWd99xubm74+PjYFw8PD/u2b775hm3btvHBBx8QHBzMjTfeyHPPPcesWbMoLCws72WKiEglM29NKgl7s3BzdmRavyAcHCxmpyQicsn89/tdFFkNIls1JFSt7nKBvNxdmDFA/d+rq3IV74WFhSQmJpYqsh0cHIiKiiI+Pr7MfeLj488qyqOjo88Zfz4ffvghXl5edOzYkXHjxnHq1KlS5+nUqRPe3t6lzpObm8vWrfrWSUSkKtt1+CRT4lIAGN+nPc0bupmckYjIpZOefZqP/yi61Nddyiu8ZUMe7d0GKOn/viND/d+rC6fyBB89ehSr1VqqQAbw9vYmJSWlzH0yMjLKjM/IyChXonfeeSctWrTA19eXTZs28cQTT7Bjxw6WLl163vOc2VaWgoICCgoK7L/n5uaWKycREbn0iq02Hv04mYJiG1de0YiBYX5mpyQickn994eSVvfuLRsQ3rKh2elIFfTgVa1Yu+cYP+88ykMfJrJ8RE/quJSr9JNKqMr8Hxw2bJj9vzt16kSTJk247rrr2L17N61atbqoY06ePJlnnnmmolIUEZFLYPaPu9l4IAcPVyem3N4Zi0Wvy4tI9XUo5zQf/3ZmhPkrTM5Gqqoz/d/7zPiZ3UfymLBsC9PvCNIztIor12vzXl5eODo6kpmZWWp9ZmYmPj4+Ze7j4+NTrvgLFR4eDsCuXbvOe54z28oybtw4cnJy7Mv+/eoTIiJSmWxNz+H11TsBePbWjhpdXkSqvTd/2E2h1UZ4QAMiWqnVXS7eX/u/L91wkCWJB8xOSf6hchXvzs7OhISEsHr1avs6m83G6tWriYiIKHOfiIiIUvEAq1atOmf8hToznVyTJk3s59m8eXOpUe9XrVqFh4cH7du3L/MYLi4ueHh4lFpERKRyKCi28ujHGymyGtzQwYdbg33NTklE5JLKyMln0TqNMC8V56/93yeq/3uVV+7R5mNjY3n77bdZsGAB27dv58EHHyQvL48hQ4YAMGjQIMaNG2ePHzVqFHFxcUyfPp2UlBSefvpp1q9fz4gRI+wxWVlZJCcns23bNgB27NhBcnKyva/67t27ee6550hMTCQ1NZXly5czaNAgrrzySjp37gxA7969ad++Pffccw8bN27k66+/Zvz48QwfPhwXF5eLv0MiImKK177dSUrGCRrWceaFf3XUq34iUu3N/rGk1T3MvwER6usuFeTBq1r9Zf73RPIKNP97VVXu4r1///5MmzaNiRMnEhwcTHJyMnFxcfbB4dLS0jh06JA9PjIykoULFzJnzhyCgoL45JNPWLZsGR07drTHLF++nC5dutCnTx8ABgwYQJcuXZg9ezZQ0uL/7bff0rt3b9q2bcujjz7K7bffzhdffGE/hqOjIytWrMDR0ZGIiAjuvvtuBg0axLPPPntxd0ZEREyTuO84b/24G4AXb+tEQ3d9CSsi1Vtmbj4L16UBJa3u+sJSKspf538/0//dMAyz05KLYDH0f84uNzcXT09PcnJy9Aq9iIhJThdauWnGz+w9msdtXZrySv9gs1MylZ5NFUv3Uyqrp5dvZf6aVEL96/Px/REq3qXCJew5xsC312IzYErfztzRTbO3VBYX+mwqd8u7iIjIpfRyXAp7j+bh4+HKpFs6mJ2OiMgldzg3n4/OtLpfd4UKd7kk1P+96lPxLiIilcaaXUeZvyYVKGkV8Kxdy9yEREQug9k/7qGg2EZIi/r0aK2+7nLpPHhVK668opH6v1dRKt5FRKRSyM0v4rFPNgFwd/fmXHlFI5MzEhG59A7n5vNhwj4ARl2nvu5yaTk4WHj1jiB7//cnP9us/u9ViIp3ERGpFJ5fsY2D2adp3sCNcTe2MzsdEZHL4q2fSlrduzSvR69AL7PTkRqgobsLM+/siqODhc+T0+0DJUrlp+JdRERMt3p7Jh+vP4DFAtP6BVHHxcnslERELrkjJwrsre6jo9TXXS6fUP8GPB5d0v/9meXb2HIwx+SM5EKoeBcREVMdzytk7NLNANzXqyVhAQ1MzkhE5PKY89Nu8otsBPvV40q1ustlNuzKlkS1a0yh1cZDHyaRc7rI7JTkb6h4FxERU034fAtHThQQ2Nid2OuvMDsdEZHL4ujJAt5f+0dfd83rLiawWCxM7xdMs/q1Scs6xeOfbFT/90pOxbuIiJjmi43prNh0CEcHC6/cEYxrLUezUxIRuSzm/LSH/CIbQc08uVoDdIpJPN1qMevOrtRytPD11kzm/ppqdkpyHireRUTEFIdz85nw+RYARlzTmk7NPE3OSETk8jh6soD349XqLpVDkF89xvdpD8DkldtJ3Hfc5IzkXFS8i4jIZWcYBmOXbib7VBEdm3ow4trWZqckInLZvP3zHk4XWenczJNr2jQ2Ox0RBkW0oE/nJhTbDEYsTCIrr9DslKQMKt5FROSyW7L+AN+lHMbZ0YFX7gimlqMeRyJSM2TlFf7Z6q553aWSsFgsvHRbJwK86nAoJ5/Yj5Ox2dT/vbLRX0siInJZHTh+imdXbAPg0d5XcIV3XZMzEhG5fN7+eQ+nCq10aurJtW3V6i6VR13XWvz3rq64ODnww4
4jvPnjbrNTkv+h4l1ERC4bm83gsSWbOFlQTLcW9flPr5ZmpyQictnk5hfx3ppUAB5Wq7tUQu2aePDcrR0BmP7NDuJ3HzM5I/krFe8iInLZvBefSvyeY9Su5ci0fkE4OugPVxGpOT7+bT95hVau8HYnqp1a3aVy6tetGbd3bYbNgIcXbeDwiXyzU5I/qHgXEZHLYs+Rk7wUlwLAkze1xd+rjskZiYhcPlabwXt/9HW/NzJAre5SaVksFp6L6cAV3u4cOVHAqI+Ssar/e6Wg4l1ERC65YquNR5dsJL/IRq9AL+7u3sLslERELqvvUg6TlnUKz9q1+FeXpmanI3Jebs5O/Peurrg5OxK/5xivffu72SkJKt5FROQymPPzHjakZVPX1YmXb++sFicRqXHmr9kLwIAwP2o7O5qcjcjfa924LpNv6wTAG9/t4ocdh03OSFS8i4jIJbX9UC6vrir5xv7p/+uAb73aJmckInJ57cg4wa+7juFggXv05pFUIbcGN+Wu8OYAPLI4mfTs0yZnVLOpeBcRkUumsNhG7McbKbIaXN/em9u66lVREal55v8xwnx0Bx+a1XczNxmRcppwc3s6NvXg+KkiRn60gSKrzeyUaiwV7yIicslMiUth+6FcGtRx5sV/ddLr8iJS42SfKuSzDQcAuDfS39xkRC6Cay1H/ntnCHVdnUjcd5wpfww+K5efincREbkkvkvJ5J1fSvp4vnx7ZxrVdTE5IxGRy2/Rb/vJL7LRvokHYQENzE5H5KI0b+jG1L5BALz9816+2ZphckY1k4p3ERGpcBk5+YxZsgmAIT38ub69t8kZiYhcfsVWG+/98cr8vT389faRVGk3dPRhaM8AAB5dspG0Y6dMzqjmUfEuIiIVymozGL14A1l5hXTw9WDsjW3NTklExBSrtmWSnpNPgzrO3BLka3Y6Iv/Y2Bvb0rV5PU7kF/PQwkTyi6xmp1SjqHgXEZEKNev7Xazdk0UdZ0dm3tkVFydNiSQiNdO8P1rd7wxrjmstfRZK1VfL0YGZd3alvlstthzM5YUvt5udUo1yUcX7rFmz8Pf3x9XVlfDwcNatW3fe+CVLltC2bVtcXV3p1KkTK1euLLV96dKl9O7dm4YNG2KxWEhOTi61PSsri5EjR9KmTRtq165N8+bNefjhh8nJySkVZ7FYzloWLVp0MZcoIiIXYd3eLF77tmRauOf/1ZEArzomZyQiYo6t6Tms25uFk4OFuzU9nFQjvvVq80r/YADeX7uP5RvTzU2oBil38b548WJiY2OZNGkSSUlJBAUFER0dzeHDh8uMX7NmDQMHDmTo0KFs2LCBmJgYYmJi2LJliz0mLy+Pnj178vLLL5d5jPT0dNLT05k2bRpbtmxh/vz5xMXFMXTo0LNi582bx6FDh+xLTExMeS9RREQuwvG8QkYt2oDNgNu7NuNfXZqZnZKIiGnm/5oKwI2dmuDj6WpuMiIV7Jo2jRl+TSsAxn66iV2HT5qcUc1gMQzDKM8O4eHhhIaGMnPmTABsNht+fn6MHDmSsWPHnhXfv39/8vLyWLFihX1d9+7dCQ4OZvbs2aViU1NTCQgIYMOGDQQHB583jyVLlnD33XeTl5eHk5NTycVYLHz22WcXXbDn5ubi6elJTk4OHh4eF3UMEZGayDAM7nsvkW+3Z9LSqw5fjOxJHRcns9OqFvRsqli6n3I5HDtZQMRL31FYbOPTByMJaVHf7JREKlyx1cbd7yawdk8Wbbzrsmx4D2o7q3vIxbjQZ1O5Wt4LCwtJTEwkKirqzwM4OBAVFUV8fHyZ+8THx5eKB4iOjj5n/IU6c2FnCvczhg8fjpeXF2FhYcydO5dyfjchIiIXYf6aVL7dnomzowNv3NlFhXs1UNFd5AzDYOLEiTRp0oTatWsTFRXFzp07S8W88MILREZG4ubmRr169co8j7rISVXw0bo0CottBDXzpGvzemanI3JJODk6MGNAF7zcXdiReYIJn29R7XWJlat4P3r0KFarFW/v0lP+eHt7k5FR9lx/GRkZ5Yq/0Dyee+45hg0bVmr9s88+y8cff8yqVau4/fbbeeihh3jjjTfOeZyCggJyc3NLLSIiUj5bDuYweWUKAE/1aUcHX0+TM5J/6lJ0kZsyZQozZsxg9uzZJCQkUKdOHaKjo8nPz7fHFBYW0q9fPx588MHz5qcuclKZFVltvL92H6Dp4aT6a+zhyoyBwThY4JPEA7z7y16zU6rWqtxo87m5ufTp04f27dvz9NNPl9o2YcIEevToQZcuXXjiiSd4/PHHmTp16jmPNXnyZDw9Pe2Ln5/fJc5eRKR6OVlQzMiPNlBotdG7vTeDIjQoU3XwyiuvcN999zFkyBDat2/P7NmzcXNzY+7cuWXGv/7669xwww089thjtGvXjueee46uXbvau9gZhsFrr73G+PHjufXWW+ncuTPvvfce6enpLFu2zH6cZ555hkceeYROnTqdN7969erh4+NjX1xd1Z9YKo+vtmSQmVtAo7ou9Omk6eGk+ots5cW4G9sB8MLK7azalmlyRtVXuYp3Ly8vHB0dycws/T8kMzMTHx+fMvfx8fEpV/z5nDhxghtuuIG6devy2WefUatWrfPGh4eHc+DAAQoKCsrcPm7cOHJycuzL/v37y52TiEhNNnHZFvYezcPX05UpfTurhakauBRd5Pbu3UtGRkapGE9PT8LDwy+qG115usjpLTu53Ob9WtLyeFd4c5ydqlw7mchF+U+vAAaGNccw4OGPNrDlYM7f7yTlVq5PFGdnZ0JCQli9erV9nc1mY/Xq1URERJS5T0RERKl4gFWrVp0z/lxyc3Pp3bs3zs7OLF++/IK+ZU9OTqZ+/fq4uLiUud3FxQUPD49Si4iIXJhPEw+wdMNBHB0szBjYhXpuzmanJBXgUnSRO/OzIrrRlbeLnN6yk8speX82G9KyqeVo4a5wvYkkNYfFYuHZWzvQK9CL00VWhi74jUM5p81Oq9op94hCsbGxDB48mG7duhEWFsZrr71GXl4eQ4YMAWDQoEE0bdqUyZMnAzBq1Ciuuuoqpk+fTp8+fVi0aBHr169nzpw59mNmZWWRlpZGenrJHIE7duwAsL8Od6ZwP3XqFB988EGpb84bNWqEo6MjX3zxBZmZmXTv3h1XV1dWrVrFiy++yJgxY/7ZHRIRkbPsPnKSCZ+X9Gd+JCqQbv4NTM5IaooJEybY/7tLly7k5eUxdepUHn744TLjx40bR2xsrP333NxcFfByycz/o9X9/zr70qhu2Y1HItVVLUcHZt3Vldv/u4adh08ydP56ljwQoUFsK1C53+Xp378/06ZNY+LEiQQHB5OcnExcXJz92/S0tDQOHTpkj4+MjGThwoXMmTOHoKAgPvnkE5YtW0bHjh3tMcuXL6dLly706dMHgAEDBtClSxf7VHJJSUkkJCSwefNmWrduTZMmTezLmVfda9WqxaxZs4iIiCA4OJi33nqLV155hUmTJl383RERkbPkF1kZuXADpwqtRLZqyINXtzY7JalAl6KL3JmfFdWN7q/+rouc3rKTy
+Vwbj5fbi75G3hIjwCTsxExh4drLebeG4qXuzPbDuUyatEGrDaNQF9RLqojzogRI9i3bx8FBQUkJCQQHh5u3/bDDz8wf/78UvH9+vVjx44dFBQUsGXLFm666aZS2++9914MwzhrOTMg3dVXX13mdsMw8Pf3B+CGG25gw4YNnDhxgpMnT5KcnMz999+Pg4P6GomIVKSXvkph26FcGtZx5tX+wTg6qJ97dXIpusgFBATg4+NTKiY3N5eEhIRyd6P7X3/XRU7kcvkwIY0iq0FIi/p0aqZZN6Tm8mvgxpxB3XBxcuDb7Yd54cvtZqdUbegdBhERuWDfbM1g/ppUAKbdEYS3h0b5ro4quoucxWJh9OjRPP/88wQGBhIQEMCECRPw9fUtNc1bWlqavSud1WolOTkZgNatW+Pu7q4uclJpFRRb+TDhj+nhIv3NTUakEujavD7T7whixMINzP11LwFebtwT4W92WlWeincREbkgB7NP89gnmwAYdmVLrmnT2OSM5FLp378/R44cYeLEiWRkZBAcHHxWF7m/vtl2povc+PHjefLJJwkMDDyri9zjjz9OXl4ew4YNIzs7m549exIXF1dqANqJEyeyYMEC++9dunQB4Pvvv+fqq6+2d5F75JFHMAyD1q1b26e1EzHTl5sOcfRkIT4ertzQ8Z91BRGpLm7u7Mu+Y6eY+vUOJi3fil8DN67W3w7/iMU43/wqNUxubi6enp7k5OSoT5yIyF8UW20MmLOW9fuOE+RXjyX3R2gKpMtEz6aKpfspFc0wDG6Z+SubD+bwWHQbhl+jcUBEzjAMg8c+2cQniQdwd3HikwcjaOujz97/daHPJv3lJSIif+v11TtZv+84dV2ceGNAFxXuIiJ/SEo7zuaDOTg7OTAwrLnZ6YhUKhaLhRf/1YnwgAacLCjm3/N+4/CJfLPTqrL015eIiJzXml1Hmfn9LgBevK0TzRu6mZyRiEjlMffXVABign1pUMfZ3GREKiFnJwfeuieEll51SM/J574F6zldaDU7rSpJxbuIiJzT0ZMFjFqcjGHAwDA//i/I1+yUREQqjUM5p4nbkgHAvZGaHk7kXOq5OTP33lDqudVi44EcHlmcjE1TyJWbincRESmTzWYwZslGjpwoILCxOxNv7mB2SiIilcr78fuw2gzCAxrQ3lf9eEXOx9+rDnPu6YazowNxWzN4+esUs1OqclS8i4hImd79ZS8/7DiCi5MDM+/sSm1nR7NTEhGpNPKLrHy0Lg2AIT3U6i5yIcICGvBy304AvPXjHhb98W9ILoyKdxEROcvG/dm8HFfyjfik/+tAG5+6JmckIlK5LE9O5/ipIprWq8317b3NTkekyvhXl2Y8fF0gAOOXbeHXXUdNzqjqUPEuIiKl5OYXMfKjDRTbDPp0asLAMD+zUxIRqVQMw2Dur3sBGBzZAkcHi8kZiVQtj0QFckuQL8U2gwc+SGTX4RNmp1QlqHgXERE7wzB4culm0rJO0ax+bV68rRMWi/4oFRH5q4S9WaRknKB2LUf6d9P0cCLlZbFYmNK3M91a1OdEfjFD5v/GsZMFZqdV6al4FxERuw8T0lix6RBODhZmDOyCZ+1aZqckIlLpzPuj1f1fXZvi6abPSZGL4VrLkbfuCaF5Azf2Z51m2PuJ5BdpCrnzUfEuIiIAJO7L4pkvtgLwWHQbujavb3JGIiKVz/6sU6zalgnAkEh/c5MRqeIaursw995QPFydSNx3nMc+2YRhaAq5c1HxLiIiZObm88AHSRRZS/q5D7uypdkpiYhUSu+v3YfNgJ6tvQj01mCeIv9U68buzL47BCcHC19sTOfVVb+bnVKlpeJdRKSGKyi28uAHiRw5UUAb77pM6dtZ/dxFRMpwqrDYPrXVkB7+5iYjUo1EtvbixX+VTCE347tdfJp4wOSMKicV7yIiNdzTy7eRlJaNh6sTcwaFUMfFyeyUREQqpaVJB8nNL6ZFQzeuadPY7HREqpU7Qv148OpWAIxduomEPcdMzqjyUfEuIlKDLUxI46N1aVgsMGNgF1o0rGN2SiIilZJhGMxfkwrA4Ah/HDQ9nEiFe6x3G27q5EOR1eD+DxLZkaEp5P5KxbuISA2VuO84k5ZvAWBM7zZcrVYkEZFz+mXXUXYdPkkdZ0f6dmtmdjoi1ZKDg4VX7ggmyK8e2aeK6D8nnuT92WanVWmoeBcRqYEO5+bz4AeJFFkNburkw0N/vKYmIiJlm/9rKgD9uvnh4arp4UQuFddajiwYEkrwHwX8XW+vJX63XqEHFe8iIjVOYbGNBz9M4vCJAq7wdmdq3yANUCcich6pR/P4bsdhAAZFtDA5G5Hqr56bMx/8J5zIVg3JK7QyeN46Vm/PNDst06l4FxGpYZ75YiuJ+46XDFB3TzcNUCci8jcWxKdiGHBNm0a0bORudjoiNYK7ixNz7w0lql1jCott3P9+Iss3ppudlqlUvIuI1CCL1qXxYULJAHWvD+iCv5cGqBMROZ+TBcUsWV8ybdW9PQJMzkakZnGt5cibd4dwa7AvxTaDUYs2sDAhzey0TKPiXUSkhkhKO87Ez7cC8Oj1V3BNWw1QJyLydz5Zv5+TBcW0bFSHXq29zE5HpMap5ejAq3cEc1d4cwwDnvxsM3N+2m12WqZQ8S4iUgMcPlEyQF2h1cYNHXwYfk1rs1MSEan0bDaDBfH7ABgSqenhRMzi4GDh+ZiO3H9VSwBeXJnC9G92YBiGyZldXhdVvM+aNQt/f39cXV0JDw9n3bp1541fsmQJbdu2xdXVlU6dOrFy5cpS25cuXUrv3r1p2LAhFouF5OTks46Rn5/P8OHDadiwIe7u7tx+++1kZpYetCAtLY0+ffrg5uZG48aNeeyxxyguLr6YSxQRqTYKi2089EESmbkFBDZ2Z9odGqBORORC/Pj7EfYezaOuixO3ddX0cCJmslgsjLuxHY9FtwHgje928cwX27DZak4BX+7iffHixcTGxjJp0iSSkpIICgoiOjqaw4cPlxm/Zs0aBg4cyNChQ9mwYQMxMTHExMSwZcsWe0xeXh49e/bk5ZdfPud5H3nkEb744guWLFnCjz/+SHp6Orfddpt9u9VqpU+fPhQWFrJmzRoWLFjA/PnzmThxYnkvUUSkWnluxTbW7ztOXVcn5gzqhrsGqBMRuSBzf90LwIAwPw3uKVJJDL+mNc/e2gGA+WtSefzTTRRbbSZndXlYjHK+axAeHk5oaCgzZ84EwGaz4efnx8iRIxk7duxZ8f379ycvL48VK1bY13Xv3p3g4GBmz55dKjY1NZWAgAA2bNhAcHCwfX1OTg6NGjVi4cKF9O3bF4CUlBTatWtHfHw83bt356uvvuLmm28mPT0db29vAGbPns0TTzzBkSNHcHZ2/ttry83NxdPTk5ycHDw8PMpzW0REKqWPf9vP459uwmKBdwd349q23manJOWkZ1PF0v2UC/V75gl6v/oTDhb48bFr8GvgZnZKIvIXS5MO8Ngnm7DaDG7s6MNrA4JxcXI0O62LcqHPpnK1vBcWFpKYmEhUVNSfB3BwICoqivj4
+DL3iY+PLxUPEB0dfc74siQmJlJUVFTqOG3btqV58+b248THx9OpUyd74X7mPLm5uWzduvWCzyUiUl0k789m/LKSt5xio65Q4S4iUg7z/mh1j+7go8JdpBK6rWszZt3ZFWdHB77aksF/FqznVGH17jJdruL96NGjWK3WUgUygLe3NxkZGWXuk5GRUa74cx3D2dmZevXqnfM45zrPmW1lKSgoIDc3t9QiIlIdHDlRwAPvlwxQ17u9twaoExEph6y8QpYmHQTg3z01PZxIZXVDRx/evbcbtWs58vPOowx6dx25+UVmp3XJ1OjR5idPnoynp6d98fPzMzslEZF/rLDYxvAPk8jIzadVozpMvyNIIySLiJTDR+vSKCi20ampJ91a1Dc7HRE5j16BjfjgP2HUdXVi/b7jDJyzlmMnC8xO65IoV/Hu5eWFo6PjWaO8Z2Zm4uPjU+Y+Pj4+5Yo/1zEKCwvJzs4+53HOdZ4z28oybtw4cnJy7Mv+/fsvOCcRkcrqhS+3sS41i7ouJQPU1XWtZXZKIiJVRmGxjffiUwEY0sNfs3OIVAEhLRqwaFh3GtZxZmt6Lne8Fc+hnNNmp1XhylW8Ozs7ExISwurVq+3rbDYbq1evJiIiosx9IiIiSsUDrFq16pzxZQkJCaFWrVqljrNjxw7S0tLsx4mIiGDz5s2lRr1ftWoVHh4etG/fvszjuri44OHhUWoREanKlqzfb5+T+NX+wbRq5G5yRiIiVctXWw6RmVtAo7ou9OncxOx0ROQCdfD15OMHImji6cruI3n0fTOe1KN5ZqdVocr92nxsbCxvv/02CxYsYPv27Tz44IPk5eUxZMgQAAYNGsS4cePs8aNGjSIuLo7p06eTkpLC008/zfr16xkxYoQ9Jisri+TkZLZt2waUFObJycn2vuqenp4MHTqU2NhYvv/+exITExkyZAgRERF0794dgN69e9O+fXvuueceNm7cyNdff8348eMZPnw4Li4uF3+HRESqiI37s3nqjwHqHom6gqj2GqBORKQ8DMNg7i8lA9Xd071FlR25WqSmatXInSUPRODf0I2D2afp91Y8OzJOmJ1WhSl38d6/f3+mTZvGxIkTCQ4OJjk5mbi4OPvgcGlpaRw6dMgeHxkZycKFC5kzZw5BQUF88sknLFu2jI4dO9pjli9fTpcuXejTpw8AAwYMoEuXLqWmknv11Ve5+eabuf3227nyyivx8fFh6dKl9u2Ojo6sWLECR0dHIiIiuPvuuxk0aBDPPvts+e+KiEgVc+REAQ98kEhhsY3r23sz8loNUCciUl5JadlsPJCDs5MDd4Y3NzsdEbkIzeq78fEDEbT1qcuREwX0nxNP8v5ss9OqEOWe570609yvIlIVFVlt3PVOAuv2ZtGyUR0+H95D/dyrET2bKpbup5zP8IVJfLnpEHd0a8aUvkFmpyMi/0D2qULunfcbyfuzqePsyDuDQ4lo1dDstMp0SeZ5FxGRyueFL7ezbm8W7i5OzLlHA9SJiFyMg9mnidtS0mVzSA9NDydS1dVzc+aD/4QT2aoheYVWBs9bx9KkA1TltmsV7yIiVdiS9fuZvyYVKBmgrnVjDVAnInIx3otPxWoziGzVkHZN9FaGSHXg7uLE3HtDiWrXmMJiG7Efb+T+9xM5cqJqTiWn4l1EpIr6dddRxi3dDMDD1wVyvQaoExG5KKcKi/koIQ2Af6vVXaRaca3lyOy7QxjT+wpqOVr4ZlsmvV/9kS82ppudWrmpeBcRqYJSMnJ54P1Eim0G/xfky+jrAs1OSUSkyvo06SC5+cW0aOjGtW0bm52OiFQwJ0cHRlwbyPIRPWnfxIPjp4oY+dEGhn+YxLGTVacVXsW7iEgVcyjnNPfO/Y0TBcWEBTRgWr/OODhYzE5LRKRKstkM5v1aMj3ckEh/fZ6KVGPtmnjw+YgejLouECcHC19uPkTvV38ibsuhv9+5ElDxLiJSheTmFzFk3m9k5ObTurE7b9/TTfMQi4j8Az/uPMKeI3nUdXGibzc/s9MRkUuslqMDj1x/BcuG96CtT12O5RXywAdJPPzRBo7nFZqd3nmpeBcRqSIKi2089EESKRknaFTXhflDQvF008jyIiL/xNxfSlrd+4f64e7iZHI2InK5dGzqyecjejDimtY4OlhYvjGd61/9iVXbMs1O7ZxUvIuIVAGGYTB26SZ+2XUUN2dH5t0bSrP6bmanJSJSpf2eeYKfdx7FwQKDI/3NTkdELjMXJ0fGRLdh6YORtG7sztGTBdz33npiP04m51SR2emdRcW7iEgV8Oqq31madBBHBwuz7upKx6aeZqckIlLlzfs1FYDe7X3wa6AvREVqqiC/eqwY2ZP7r2qJgwWWJh2k92s/8v2Ow2anVoqKdxGRSm7RujRmfLcLgBdiOnJNG42ELCLyTx3PK2Rp0gEA/t1T08OJ1HSutRwZd2M7ljwQSYBXHTJzCxgy7zee+GQTufmVoxVexbuISCX2w47DPLVsCwAjr23NgLDmJmckIlI9LFyXRkGxjY5NPQj1r292OiJSSYS0qM/Kh3sxtGcAFgssXr+fG179iZ93HjE7NRXvIiKV1ZaDOQz/MAmrzeC2rk2Jvf4Ks1MSEakWiqw23o/fB8CQyAAsFk0PJyJ/qu3syISb27N4WAQtGrqRnpPPPe+u48nPNnOyoNi0vFS8i4hUQgeOn2LI/N/IK7TSs7UXL93WWX9ciohUkK+2ZJCRm4+Xuws3BzUxOx0RqaTCAhrw1aheDI5oAcDChDRueO0n1uw+ako+Kt5FRCqZnFNF3DvvN46cKKCtT13+e3dXnJ30cS0iUlHOTA93T/cWuDg5mpyNiFRmbs5OPHNrRxbeF06z+rU5cPw0d76dwKTPt3Cq8PK2wuuvQRGRSqSg2Mqw99ez6/BJfDxcmTckFA9XzeUuIlJRktKOk7w/G2dHB+7qrnFEROTCRLbyIm70ldwVXvK5sSB+Hze9/jN5l/E1ehXvIiKVhM1mMGbJJhL2ZlHXxYl5Q0Jp4lnb7LSkhpo1axb+/v64uroSHh7OunXrzhu/ZMkS2rZti6urK506dWLlypWlthuGwcSJE2nSpAm1a9cmKiqKnTt3lop54YUXiIyMxM3NjXr16pV5nrS0NPr06YObmxuNGzfmscceo7jYvP6HUvWcaXW/NdgXL3cXk7MRkarE3cWJF/7VifeHhuHr6cqVVzSijovTZTu/incRkUpiytc7+GJjOk4OFmbfE0K7Jh5mpyQ11OLFi4mNjWXSpEkkJSURFBREdHQ0hw+XPd/tmjVrGDhwIEOHDmXDhg3ExMQQExPDli1b7DFTpkxhxowZzJ49m4SEBOrUqUN0dDT5+fn2mMLCQvr168eDDz5Y5nmsVit9+vShsLCQNWvWsGDBAubPn8/EiRMr9gZItZWefZqvtmQAMKSHpocTkYvTK7ARcY9cydgb217W81oMwzAu6xkrsdzcXDw9PcnJycHDQ380i8jl8/7afUz4Y0q46f2CuD2kmckZSWVhxrMpPDyc0NBQZs6cCYDNZsPPz4+RI0cyduz
Ys+L79+9PXl4eK1assK/r3r07wcHBzJ49G8Mw8PX15dFHH2XMmDEA5OTk4O3tzfz58xkwYECp482fP5/Ro0eTnZ1dav1XX33FzTffTHp6Ot7e3gDMnj2bJ554giNHjuDs7Py316Znfc320lcpzP5xNxEtG/LRsO5mpyMiAlz4s0kt7yIiJlu1LZNJn5cU7rHXX6HCXUxVWFhIYmIiUVFR9nUODg5ERUURHx9f5j7x8fGl4gGio6Pt8Xv37iUjI6NUjKenJ+Hh4ec85rnO06lTJ3vhfuY8ubm5bN26tcx9CgoKyM3NLbVIzXSqsJiP1qUB8O+eanUXkapHxbuIiImS92cz8qMkbAYMCPVj5LWtzU5JarijR49itVpLFcgA3t7eZGRklLlPRkbGeePP/CzPMctznr+e439NnjwZT09P++Ln53fB55PqZWnSQXJOF9GioRvXtm1sdjoiIuWm4l1ExCT7juUxdP5v5BfZuOqKRjwX01FzuYtUsHHjxpGTk2Nf9u/fb3ZKYgKbzWDeryUD1d0b6Y+jgz5rRaTqUfEuImKCrLxC7p33G8fyCung68Gsu7pSy1EfyWI+Ly8vHB0dyczMLLU+MzMTHx+fMvfx8fE5b/yZn+U5ZnnO89dz/C8XFxc8PDxKLVLz/LTzCLuP5FHXxYl+3fT2hYhUTfpLUUTkMssvsnLfe+vZezSPpvVqM+/eUNwv4zQjIufj7OxMSEgIq1evtq+z2WysXr2aiIiIMveJiIgoFQ+watUqe3xAQAA+Pj6lYnJzc0lISDjnMc91ns2bN5ca9X7VqlV4eHjQvn37Cz6O1Dxzf00F4I5QP33eikiVpU8vEZHLyGozeGRxMon7juPh6sT8IaE09nA1Oy2RUmJjYxk8eDDdunUjLCyM1157jby8PIYMGQLAoEGDaNq0KZMnTwZg1KhRXHXVVUyfPp0+ffqwaNEi1q9fz5w5cwCwWCyMHj2a559/nsDAQAICApgwYQK+vr7ExMTYz5uWlkZWVhZpaWlYrVaSk5MBaN26Ne7u7vTu3Zv27dtzzz33MGXKFDIyMhg/fjzDhw/HxUXzdUvZdmae4Kffj+BgKXllXkSkqlLxLiJymRiGwXMrtvHVlgycHR2YM6gbgd51zU5L5Cz9+/fnyJEjTJw4kYyMDIKDg4mLi7MPDpeWloaDw58v70VGRrJw4ULGjx/Pk08+SWBgIMuWLaNjx472mMcff5y8vDyGDRtGdnY2PXv2JC4uDlfXP7+8mjhxIgsWLLD/3qVLFwC+//57rr76ahwdHVmxYgUPPvggERER1KlTh8GDB/Pss89e6lsiVdi8NakAXN/eG78GbuYmIyLyD1zUPO+zZs1i6tSpZGRkEBQUxBtvvEFYWNg545csWcKECRNITU0lMDCQl19+mZtuusm+3TAMJk2axNtvv012djY9evTgzTffJDAwEIAffviBa665psxjr1u3jtDQUFJTUwkIOHvaj/j4eLp3v7B5PDX3q4hcKoZhMOXrHbz5w24AZgzswi1BviZnJVWBnk0VS/ezZsk+VUj3yavJL7KxeFh3wls2NDslEZGzXLJ53hcvXkxsbCyTJk0iKSmJoKAgoqOjS/U/+6s1a9YwcOBAhg4dyoYNG4iJiSEmJoYtW7bYY6ZMmcKMGTOYPXs2CQkJ1KlTh+joaPLz84GSb/QPHTpUavnPf/5DQEAA3bp1K3W+b7/9tlRcSEhIeS9RRKTCvbrqd3vh/tytHVS4i4hcBh+t209+kY0Ovh6EBTQwOx0RkX+k3C3v4eHhhIaGMnPmTKBkEBs/Pz9GjhzJ2LFjz4rv378/eXl5rFixwr6ue/fuBAcHM3v2bAzDwNfXl0cffZQxY8YAkJOTg7e3N/Pnz2fAgAFnHbOoqIimTZsycuRIJkyYAGBved+wYQPBwcHluSQ7fRsvIpfCjNU7eWXV7wBMvLk9/+559ltCIueiZ1PF0v2sOYqsNq6c8j2HcvKZ3i+I20OamZ2SiEiZLknLe2FhIYmJiURFRf15AAcHoqKiiI+PL3Of+Pj4UvEA0dHR9vi9e/eSkZFRKsbT05Pw8PBzHnP58uUcO3bMPnDOX91yyy00btyYnj17snz58vNeT0FBAbm5uaUWEZGK9OYPu+2F+5M3tVXhLiJymcRtyeBQTj5e7i7cHNTE7HRERP6xchXvR48exWq12gesOcPb25uMjIwy98nIyDhv/Jmf5Tnmu+++S3R0NM2a/fkNqru7O9OnT2fJkiV8+eWX9OzZk5iYmPMW8JMnT8bT09O++Plp3k8RqTjv/LyHl+NSAHgsug3DrmxlckYiIjXH3F/3AnB39+a4ODmanI2IyD9X5UabP3DgAF9//TUff/xxqfVeXl7Exsbafw8NDSU9PZ2pU6dyyy23lHmscePGldonNzdXBbyIVIj5v+7l+S+3AzA6KpDh17Q2OSMRkZojKe04G9KycXZ04K7wFmanIyJSIcrV8u7l5YWjoyOZmZml1mdmZuLj41PmPj4+PueNP/PzQo85b948GjZseM6C/K/Cw8PZtWvXObe7uLjg4eFRahER+ac+WLuPp7/YBsCIa1oz6rpAkzMSEalZ5v2aCsAtwb40qutibjIiIhWkXMW7s7MzISEhrF692r7OZrOxevVqIiIiytwnIiKiVDzAqlWr7PEBAQH4+PiUisnNzSUhIeGsYxqGwbx58xg0aBC1atX623yTk5Np0kR9nETk8ln8Wxrjl5XMpnH/lS15tPcVWCwWk7MSEak5DuWcZuXmQwAM6eFvbjIiIhWo3K/Nx8bGMnjwYLp160ZYWBivvfYaeXl59sHjBg0aRNOmTZk8eTIAo0aN4qqrrmL69On06dOHRYsWsX79eubMmQOAxWJh9OjRPP/88wQGBhIQEMCECRPw9fUlJiam1Lm/++479u7dy3/+85+z8lqwYAHOzs506dIFgKVLlzJ37lzeeeed8l6iiMhF+STxAGOXbgbg3z0CGHtjWxXuIiKX2Xvx+7DaDLq3bEAHX0+z0xERqTDlLt779+/PkSNHmDhxIhkZGQQHBxMXF2cfcC4tLQ0Hhz8b9CMjI1m4cCHjx4/nySefJDAwkGXLltGxY0d7zOOPP05eXh7Dhg0jOzubnj17EhcXh6ura6lzv/vuu0RGRtK2bdsyc3vuuefYt28fTk5OtG3blsWLF9O3b9/yXqKISLl9nnyQxz7ZiGHAoIgWTLi5nQp3EZHL7HShlYUJaUDJl6giItVJued5r84096uIXIwvNx1i5EdJ2AwYGNacF2I64uCgwl0qhp5NFUv3s3p7Pz6VCZ9vpXkDN74fczWO+iwWkSrgkszzLiIipcVtyeDhRRuwGXBHt2Yq3EVETJJfZGXW97sB+E+vABXuIlLtqHgXEblI327LZORHSVhtBrd1acrk2zqrcBcRMclH69LIyM3H19OV/qGa+ldEqh8V7yIiF+GHHYd56MMkiqwG/xfky9R+QWrlERExyelCK//9oaTVfcS1gbg4OZqckYhIxVPxLiJSTj/vPMKw9xMptNq4qZMPr96hwl1ExE
wfJuzjyIkCmtWvTd+QZmanIyJySah4FxEphzW7j/KfBespLLZxfXtvXh/QBSdHfZSKiJglr6CYN/9odX/42kCcnfSZLCLVkz7dREQu0Lq9WQydv56CYhvXtm3MzDu7UEuFu4iIqd6L38exvEJaNHTjtq5NzU5HROSS0V+dIiIXIHHfcYbMW8fpIitXXtGI/97VVX0qRURMdiK/iLd+Kml1H3VdoN6EEpFqTZ9wIiJ/I3l/NvfOXUdeoZUerRsy554QXGupcBcRMdv8X1PJPlVEy0Z1uDVYre4iUr2peBcROY8tB3MY9G4CJwqKCQ9owDuDQlW4i4hUAjmni3j75z0AjI66QgOHiki1p+JdROQcEvcd5653EsjNL6Zbi/rMvTeU2s4q3EVEKoN3f9lLbn4xV3i706dTE7PTERG55JzMTkBEpDJavT2T4QuTyC+y0bV5PeYNCaWOiz4yRUQqg+xThcz9ZS+gVncRqTn0l6iIyP/4+Lf9jPtsM1abwTVtGjHrrq64OevjUkSksnj75z2cLCimXRMPbujgY3Y6IiKXhf4aFRH5g2EYzPp+F9O++R2AviHNmHxbJ00HJyJSiRw7WcC8X1MBeCQqEAe1uotIDaHiXUQEsNoMnvliK+/F7wPgoatb8Vh0GywW/VEoIlKZzPlpD6cKrXRq6sn17b3NTkdE5LJR8S4iNV5+kZXYj5NZuTkDiwUm3dyee3sEmJ2WiIj8j8Mn8lkQnwpA7PVX6AtWEalRVLyLSI2Wm1/EfQvWk7A3C2dHB17pH8TNnX3NTktERMow+4c95BfZCParx9VtGpmdjojIZaXiXURqrMzcfAbPXUdKxgncXZyYMyiEyFZeZqclIiJlyMzN54OEkq5Nj/ZWq7uI1Dwq3kWkRtp95CSD3l3HwezTNKrrwvwhoXTw9TQ7LREROYdZ3++isNhGqH99erbWF60iUvOoeBeRGicp7ThD5//G8VNFBHjV4b1/h+HXwM3stERE5BwOZp9m0br9ADyivu4iUkOpeBeRGuW7lEwe+jCJ/CIbQc08mXtvKA3dXcxOS0REzmPW97sotNqIaNlQ3ZtEpMZS8S4iNcbH6/czbulmrDaDq65oxH/v6kodF30MiohUZvuzTvHxbyWt7rG9rzA5GxER8+ivVhGp9gzD4L8/7Gbq1zsAuK1rU16+vTO1HB1MzkxERP7OG9/tpNhm0CvQi1D/BmanIyJiGhXvIlKtWW0Gz36xlQXxJSMUP3BVK564oY36S4qIVAGpR/P4NOkgUNLXXUSkJlPxLiLVVkGxldjFG/ly8yEAJt7cnn/3DDA5KxERuVAzVu/EajO4pk0jujavb3Y6IiKmuqh3RmfNmoW/vz+urq6Eh4ezbt2688YvWbKEtm3b4urqSqdOnVi5cmWp7YZhMHHiRJo0aULt2rWJiopi586dpWL8/f2xWCyllpdeeqlUzKZNm+jVqxeurq74+fkxZcqUi7k8EakGcvOLGDx3HV9uPkQtRwszBnZR4S4iUoXsOnySZcklre6x17cxORsREfOVu3hfvHgxsbGxTJo0iaSkJIKCgoiOjubw4cNlxq9Zs4aBAwcydOhQNmzYQExMDDExMWzZssUeM2XKFGbMmMHs2bNJSEigTp06REdHk5+fX+pYzz77LIcOHbIvI0eOtG/Lzc2ld+/etGjRgsTERKZOncrTTz/NnDlzynuJIlLFHc7Np/9ba1m7Jwt3FyfmDwnjliBfs9MSEZFyeH31TmwGXN/em07NPM1OR0TEdBbDMIzy7BAeHk5oaCgzZ84EwGaz4efnx8iRIxk7duxZ8f379ycvL48VK1bY13Xv3p3g4GBmz56NYRj4+vry6KOPMmbMGABycnLw9vZm/vz5DBgwAChpeR89ejSjR48uM68333yTp556ioyMDJydnQEYO3Ysy5YtIyUl5YKuLTc3F09PT3JycvDw8LjgeyIilcfuIycZPHcdB46fxsvdhflDQunYVH/0SdWlZ1PF0v2sGnZknOCG13/CMGDlw71o76v/VyJSfV3os6lcLe+FhYUkJiYSFRX15wEcHIiKiiI+Pr7MfeLj40vFA0RHR9vj9+7dS0ZGRqkYT09PwsPDzzrmSy+9RMOGDenSpQtTp06luLi41HmuvPJKe+F+5jw7duzg+PHjZeZWUFBAbm5uqUVEqq61e47Rb3Y8B46fxr+hG0sfjFThLiJSBb2++ncMA27q5KPCXUTkD+UasO7o0aNYrVa8vb1Lrff29j5n63ZGRkaZ8RkZGfbtZ9adKwbg4YcfpmvXrjRo0IA1a9Ywbtw4Dh06xCuvvGI/TkBAwFnHOLOtfv2zBzmZPHkyzzzzzN9et4hUboZh8M7Pe3kpLgWrzaBzM0/m3huKl7uL2amJiEg5bU3PYeXmDCwWGHWdRpgXETmjyow2Hxsba//vzp074+zszP3338/kyZNxcbm4P9DHjRtX6ri5ubn4+fn941xF5PI5WVDM459sZOXmki/7/tWlKS/+qxO1nR1NzkxERC7Ga9+WDFp8c2df2vjUNTkbEZHKo1zFu5eXF46OjmRmZpZan5mZiY+PT5n7+Pj4nDf+zM/MzEyaNGlSKiY4OPicuYSHh1NcXExqaipt2rQ553n+eo7/5eLictGFv4iYb9fhE9z/fiK7j+RRy9HChJvbc0/3FprDXUSkitp8IIdV2zJxsMCo6wLNTkdEpFIpV593Z2dnQkJCWL16tX2dzWZj9erVRERElLlPREREqXiAVatW2eMDAgLw8fEpFZObm0tCQsI5jwmQnJyMg4MDjRs3tp/np59+oqioqNR52rRpU+Yr8yJSta3cfIhbZ/7K7iN5eHu4sGhYBIMi/FW4i4hUYa+s2gFATHBTWjd2NzkbEZHKpdyvzcfGxjJ48GC6detGWFgYr732Gnl5eQwZMgSAQYMG0bRpUyZPngzAqFGjuOqqq5g+fTp9+vRh0aJFrF+/3j6Fm8ViYfTo0Tz//PMEBgYSEBDAhAkT8PX1JSYmBigZjC4hIYFrrrmGunXrEh8fzyOPPMLdd99tL8zvvPNOnnnmGYYOHcoTTzzBli1beP3113n11Vcr4j6JSCVRbLXxclwKb/+8F4DuLRvwxsCuNKqrt2hERKqypLTjfL/jCI4OFh5Wq7uIyFnKXbz379+fI0eOMHHiRDIyMggODiYuLs4+OFxaWhoODn826EdGRrJw4ULGjx/Pk08+SWBgIMuWLaNjx472mMcff5y8vDyGDRtGdnY2PXv2JC4uDldXV6Dk9fZFixbx9NNPU1BQQEBAAI888kip/uqenp588803DB8+nJCQELy8vJg4cSLDhg276JsjIpXLkRMFjFiYRMLeLADuv7Ilj0W3wcmxXC8RiYhIJfTqqt8BuL1rU/y96picjYhI5VPued6rM839KlJ5Je7L4qEPk8jMLaCOsyPT+gVxY6cmf7+jSBWnZ1PF0v2snNbtzeKOt+JxcrDw/Zir8WvgZnZKIiKXzYU+m6rMaPMiUjMZhsGCNak8/+V2im0GrRu7M/vuEPWFFBGpRs60ut8R6qfCX
UTkHPSuqYhUWqcKixm9OJmnv9hGsc2gT+cmfD68hwp3kctg1qxZ+Pv74+rqSnh4OOvWrTtv/JIlS2jbti2urq506tSJlStXltpuGAYTJ06kSZMm1K5dm6ioKHbu3FkqJisri7vuugsPDw/q1avH0KFDOXnypH17amoqFovlrGXt2rUVd+Fy2a3ZfZT4PcdwdnRg+DWtzU5HRKTSUvEuIpXS3qN5/GvWGj5PTsfRwcL4Pu2YObALdVz0wpDIpbZ48WJiY2OZNGkSSUlJBAUFER0dzeHDh8uMX7NmDQMHDmTo0KFs2LCBmJgYYmJi2LJliz1mypQpzJgxg9mzZ5OQkECdOnWIjo4mPz/fHnPXXXexdetWVq1axYoVK/jpp5/KHLvm22+/5dChQ/YlJCSk4m+CXBaGYdhb3QeE+dG0Xm2TMxIRqbzU5/0v1A9OpHL4ZmsGj368kRMFxXi5uzDrzi6Et2xodloipjDj2RQeHk5oaCgzZ84ESqaF9fPzY+TIkYwdO/as+P79+5OXl8eKFSvs67p3705wcDCzZ8/GMAx8fX159NFHGTNmDAA5OTl4e3szf/58BgwYwPbt22nfvj2//fYb3bp1AyAuLo6bbrqJAwcO4OvrS2pqKgEBAWzYsIHg4OCLujY96yuXn3ce4Z531+Hs5MDPj1+Dt4er2SmJiFx2F/psUsu7iFQaVpvBlLgUhr2fyImCYrq1qM+XD/dU4S5yGRUWFpKYmEhUVJR9nYODA1FRUcTHx5e5T3x8fKl4gOjoaHv83r17ycjIKBXj6elJeHi4PSY+Pp569erZC3eAqKgoHBwcSEhIKHXsW265hcaNG9OzZ0+WL1/+zy5YTJNfZOWZL7YBcHd4CxXuIiJ/Q++fikilcOxkAaMWJfPLrqMADOnhz5M3taOWpoETuayOHj2K1Wq1TwF7hre3NykpKWXuk5GRUWZ8RkaGffuZdeeLady4cantTk5ONGjQwB7j7u7O9OnT6dGjBw4ODnz66afExMSwbNkybrnlljJzKygooKCgwP57bm7uea9fLp+X41LYdfgkjeq6MPJa9XUXEfk7Kt5FxHTJ+7N56INE0nPyqV3LkZdu78StwU3NTktEKhkvLy9iY2Ptv4eGhpKens7UqVPPWbxPnjyZZ5555nKlKBdoza6jzPs1FYApfTtTv46zuQmJiFQBatISEdMYhsGHCfu4Y3Y86Tn5BHjVYdnwHircRUzk5eWFo6MjmZmZpdZnZmbi4+NT5j4+Pj7njT/z8+9i/ndAvOLiYrKyss55Xijpn79r165zbh83bhw5OTn2Zf/+/eeMlcsj53QRY5ZsBODO8OZc06bx3+whIiKg4l1ETHLsZAEjPtrAU59todBqo3d7bz4f0YM2PnXNTk2kRnN2diYkJITVq1fb19lsNlavXk1ERESZ+0RERJSKB1i1apU9PiAgAB8fn1Ixubm5JCQk2GMiIiLIzs4mMTHRHvPdd99hs9kIDw8/Z77Jyck0adLknNtdXFzw8PAotYi5nlm+lfScfFo0dOOpm9qZnY6ISJWh1+ZF5LIyDIMvNx9i4udbycorxNHBwqO9r+DBq1phsVjMTk9EgNjYWAYPHky3bt0ICwvjtddeIy8vjyFDhgAwaNAgmjZtyuTJkwEYNWoUV111FdOnT6dPnz4sWrSI9evXM2fOHAAsFgujR4/m+eefJzAwkICAACZMmICvry8xMTEAtGvXjhtuuIH77ruP2bNnU1RUxIgRIxgwYAC+vr4ALFiwAGdnZ7p06QLA0qVLmTt3Lu+8885lvkNysb7afIilGw7iYIFX7gjS9J8iIuWgT0wRuWyOnChg4udb+GpLyeBTbbzrMq1fEJ2aeZqcmYj8Vf/+/Tly5AgTJ04kIyOD4OBg4uLi7APOpaWl4eDw58t7kZGRLFy4kPHjx/Pkk08SGBjIsmXL6Nixoz3m8ccfJy8vj2HDhpGdnU3Pnj2Ji4vD1fXPEcY//PBDRowYwXXXXYeDgwO33347M2bMKJXbc889x759+3BycqJt27YsXryYvn37XuI7IhXhcG4+T362GYAHrmpFSIsGJmckIlK1aJ73v9DcryKXhmEYLN+YztPLt3L8VBFODhYeuqY1I65pjbOTeu+InI+eTRVL99MchmHw7/m/8f2OI7Rv4sGy4T30+S8i8ocLfTap5V1ELqnDJ/IZ/9kWvtlWMlBVuyYeTO3bmY5N1douIlJTfLRuP9/vOIKzowOv9g9W4S4ichFUvIvIJWEYBp8npzNp+VZyTpe0to+8NpAHr26lP9pERGqQfcfyeP7LbQA8Ft1GA5OKiFwkFe8iUuEyc/N56rPNfLu9ZNqnDr4eTO0bRHtfvaIqIlKTWG0GsR9v5FShlfCABgztGWB2SiIiVZaKdxGpMIZhsDTpIM98sZXc/GJqOVoYdV0g91/VilqOam0XEalp3vppN4n7juPu4sT0O4JwcNCsIiIiF0vFu4hUiIycklGEv0spaW3v1NSTaf2C9HqkiEgNtTU9h1dX/Q7ApP9rT7P6biZnJCJStal4F5F/xDAMliQe4LkV2ziRX4yzowOjrw9kWK+WOKm1XUSkRsovshK7eCNFVoPe7b3pG9LM7JRERKo8Fe8ictHSs08zdulmfvr9CABBfvWY1rczgd5qbRcRqcleWfU7OzJP4OXuzOTbOmGx6HV5EZF/SsW7iJSbYRgs/m0/z3+5nZMFxTg7OfDo9VcwtGeAWttFRGq4tXuO8fbPewB46bbONHR3MTkjEZHqQcW7iJTLwezTjP10Ez/vPApA1+b1mNI3iNaN3U3OTEREzHYiv4hHP96IYUD/bn5Etfc2OyURkWpDxbuIXJAiq40P1+5j2je/c7KgGBcnBx6LbsOQHgE4avRgEREBnv1iGwezT9Osfm3G39zO7HRERKoVFe8icl6GYfD11kxejkth79E8ALq1qM+Uvp1p2Uit7SIiUuKbrRksSTyAxQKv3BFMXddaZqckIlKtqHgXkXNKSjvOi19uZ/2+4wB4uTvzyPVXMCC0uVrbRUTE7ujJAsYt3QzAsF4tCQtoYHJGIiLVz0WNLDVr1iz8/f1xdXUlPDycdevWnTd+yZIltG3bFldXVzp16sTKlStLbTcMg4kTJ9KkSRNq165NVFQUO3futG9PTU1l6NChBAQEULt2bVq1asWkSZMoLCwsFWOxWM5a1q5dezGXKFKj7TuWx/APk7jtv2tYv+84rrUcePja1vzw2DXcFd5ChbuIiNgZhsG4pZs5lldIW5+6xPa+wuyURESqpXIX74sXLyY2NpZJkyaRlJREUFAQ0dHRHD58uMz4NWvWMHDgQIYOHcqGDRuIiYkhJiaGLVu22GOmTJnCjBkzmD17NgkJCdSpU4fo6Gjy8/MBSElJwWaz8dZbb7F161ZeffVVZs+ezZNPPnnW+b799lsOHTpkX0JCQsp7iSI11vG8Qp79YhtRr/zIl5sPYbHAHd2a8cOYa4jt3QZ3F72sIyIipS1JPMCqbZnUcrTwyh3BuDg5mp2SiEi1ZDEMwyjPDuHh4YSGhjJz
5kwAbDYbfn5+jBw5krFjx54V379/f/Ly8lixYoV9Xffu3QkODmb27NkYhoGvry+PPvooY8aMASAnJwdvb2/mz5/PgAEDysxj6tSpvPnmm+zZUzIVSWpqKgEBAWzYsIHg4ODyXJJdbm4unp6e5OTk4OHhcVHHEKmK8ousvBefyszvdpGbXwzAlVc0YtyNbWnXRP8WRMykZ1PF0v2sWPuzTnHj6z9zsqCYJ25oy4NXtzI7JRGRKudCn03lankvLCwkMTGRqKioPw/g4EBUVBTx8fFl7hMfH18qHiA6Otoev3fvXjIyMkrFeHp6Eh4efs5jQkmB36DB2f2pbrnlFho3bkzPnj1Zvnz5ea+noKCA3NzcUotITWKzGXyefJDrpv/IiytTyM0vpl0TD94fGsZ7/w5T4S4iIudktRk8umQjJwuKCfWvz7ArW5qdkohItVaud2CPHj2K1WrF27v0nJ3e3t6kpKSUuU9GRkaZ8RkZGfbtZ9adK+Z/7dq1izfeeINp06bZ17m7uzN9+nR69OiBg4MDn376KTExMSxbtoxbbrmlzONMnjyZZ5555jxXLFJ9rd1zjBdXbmfTgRwAfDxcGRPdhn91aao+7SIi8rfe/WUP6/ZmUcfZken9gvXsEBG5xKpcB9aDBw9yww030K9fP+677z77ei8vL2JjY+2/h4aGkp6eztSpU89ZvI8bN67UPrm5ufj5+V265EUqgV2HT/DSVyl8u71knAp3FycevLoV/+4RQG1n9VMUEZG/l5KRy7Svfwdgws3tad7QzeSMRESqv3IV715eXjg6OpKZmVlqfWZmJj4+PmXu4+Pjc974Mz8zMzNp0qRJqZj/7buenp7ONddcQ2RkJHPmzPnbfMPDw1m1atU5t7u4uODi4vK3xxGpDo6cKOC1b39n0W/7sdoMHB0s3BnWnFFRgXi569+BiIhcmIJiK48s3kih1UZUu8b0D1XDh4jI5VCuPu/Ozs6EhISwevVq+zqbzcbq1auJiIgoc5+IiIhS8QCrVq2yxwcEBODj41MqJjc3l4SEhFLHPHjwIFdffTUhISHMmzcPB4e/Tz05ObnUFwIiNdHpQitvrN7J1VO/58OENKw2g97tvfnmkSt5LqajCncRESmX177dyfZDuTSo48zk2zpjseh1eRGRy6Hcr83HxsYyePBgunXrRlhYGK+99hp5eXkMGTIEgEGDBtG0aVMmT54MwKhRo7jqqquYPn06ffr0YdGiRaxfv97ecm6xWBg9ejTPP/88gYGBBAQEMGHCBHx9fYmJiQH+LNxbtGjBtGnTOHLkiD2fMy33CxYswNnZmS5dugCwdOlS5s6dyzvvvHPxd0ekCssvsvLZhoO89u3vZOYWABDUzJOn+rQnLODswR5FRET+zvrULN76cTcAL/6rI43q6gtgEZHLpdzFe//+/Tly5AgTJ04kIyOD4OBg4uLi7APOpaWllWoVj4yMZOHChYwfP54nn3ySwMBAli1bRseOHe0xjz/+OHl5eQwbNozs7Gx69uxJXFwcrq6uQElL/a5du9i1axfNmjUrlc9fZ7p77rnn2LdvH05OTrRt25bFixfTt2/f8l6iSJV27GQBH6xN4/21qRw9WQhAs/q1efyGttzcqQkOGlBIREQuwvZDuTz0YRI2A27r2pQbOurtRhGRy6nc87xXZ5r7VaqyXYdP8O4ve1madJCCYhsAvp6uDO3Vkru7N8fFSYPRiVRFejZVLN3Pi/PrrqPc/34iJwuKucLbnU8ejMTDtZbZaYmIVAsX+myqcqPNi8ifDMNgze5jvPPzHr7f8Wd3kqBmnvynV0tu6OhDLcdyDW0hIiJSytKkAzz+ySaKbQbdWzbgrXu6qXAXETGBineRKqig2MoXGw/xzs97SMk4AYDFAr3be/OfXi3p1qK+BhASEZF/xDAM/vvDbqZ+vQOA/wvyZVq/znqTS0TEJCreRaqQ43mFLFyXxoI1qRw+UTIIXe1ajtzRrRn/7hlAi4Z1TM5QRESqg2KrjYnLt7IwIQ2A+69qyRPRbTVuioiIiVS8i1QBe46cZO6ve/kk8QD5RSX92b09XLg3MoA7w5rj6abXF0VEpGKcKixm5MINrE45jMUCT/9fBwZH+pudlohIjafiXaSSMgyDhL1ZvPPzHlanHObM0JIdfD24r1dLburUBGcn9WcXEZGKc/RkAUPn/8bGAzm4ODnw+oAu3NDRx+y0REQEFe8ilU6R1caXmw7xzi972HIw174+ql1jhvZsSfeWDdSfXUREKtzeo3kMnruOtKxT1HerxTuDQwlpUd/stERE5A8q3kUqifTs03y24SDvx+8jIzcfANdaDtzetaQ/e6tG7iZnKCIi1VVS2nGGzv+N46eK8GtQmwVDwmip546ISKWi4l3ERMdOFrBySwZfJKezLjXLvr5RXRcGR7TgzvAWNKjjbGKGIiJS3X2zNYORH22goNhG52aevDs4lEZ1XcxOS0RE/oeKd5HL7ER+Ed9szWT5xnR+2XUUq82wbwvzb0C/bs24JdhXU/GIiMgl9158Kk8v34rNgOvaNuaNO7vg5qw/D0VEKiN9OotcBvlFVr5LOczy5HS+23GYwmKbfVvHph7cEuTLzZ198a1X28QsRUSkprDZDKZ8vYPZP+4GYGBYc567tQNOjhoIVUSkslLxLnKJFFlt/LLrKF8kp/PNtkxOFhTbt7VqVIdbgpryf0FN1KdQREQuq4JiK49/sonPk9MBeCy6DQ9d3UqDoYqIVHIq3kUqkM1m8FtqFss3pvPVlgyy8grt25rWq83NQU24JciX9k089EeSiIhcdjmni7j//fWs3ZOFk4OFl2/vzO0hzcxOS0RELoCKd5F/yDAMthzMZfnGg6zYdIhDOfn2bQ3rONOnc0nB3rV5fRwcVLCLiIg50rNPc++8dfyeeRJ3FyfevLsrvQIbmZ2WiIhcIBXvIheh2Gpj08EcfthxhC82prP3aJ59W10XJ6I7+nBLkC+RrRqq/6CIiJhu+6Fc7p23jszcArw9XJh3bxjtfT3MTktERMpBxbvIBbDZDFIyTrBm91HW7D7Gur1Zpfqwuzg5ENXOm/8L8uXqNo1wraWR4kVEpHL4dddR7n8/kZMFxQQ2dmf+v8NoqgFSRUSqHBXvImUwDIO9R/NYs/sY8buPEb/nWKn+6wCetWsR0bIh0R29ub69D+4u+uckIiKVh81m8EniAZ78bDPFNoPwgAbMuacbnm61zE5NREQugqoNkT8cyjnNr7uOsWb3UeJ3HyvVdx3AzdmRsIAGRLZqSGQrL9o38VAfdhERqXROFhTzyfr9LIjfZ+/W9X9Bvkzr1xkXJ70ZJiJSVal4lxrr2MkC1u7Jsr8K/9d+6wDOjg50aV6PHq29iGzVkM7N6uHspP7rIiJSOe07lsf8NaksWX/A3rWrrqsT9/VqyYhrWusLZxGRKk7Fu9QYx04WkLw/mzW7j7Fm9zG2H8ottd3BAp2a1SOyVUN6tPIipEV9ajurhUJERCovwzD4ddcx5v26l+92HMYwSta3bFSHIZH+3Na1GXXUrUtEpFr
Qp7lUO8VWG3uO5rH9UC7bD53442cuh08UnBXbxrsuka1LXoMPC2iAZ231AxQRkcrvdKGVpRsOMP/XVHYePmlff3WbRgzpEUCv1l5qaRcRqWZUvEuVlnOqiG1/FOfbD+WyPSOX3zNPUlhsKzM+wKsO3Vs2IKKVFxEtG9KorstlzlhEROTiHTh+ivfj97Hot/3knC4CoI6zI31DmjE40p+WjdxNzlBERC4VFe9SJVhtBvuO5ZVqSd9+KJf0/xlU7gw3Z0fa+tSlXROPP5a6tPHx0IjwIiJS5RiGwbq9Wcz7NZVvtmVg++PV+OYN3Bgc6U+/bs3wcNWbYyIi1Z0qGak0DMPgWF4h6dmnOXj8NAezT7P7SMnr7zsyTnC6yFrmfs3q1/6zSP+jYG/ewE2vC4qISJWWX2Rl+cZ05v+ayra/jNPSo3VDhkQGcE3bxjjqWSciUmOoeJfLpshqIyMnnwPHT5cU6Nl//jxTrBec43V3ANdaDrTx/mtrugdtm9RVa4OIiFQrGTn5fLB2HwvXpZGVVwiUPAP/1aUZQ3r4c4V3XZMzFBERM1zUvFezZs3C398fV1dXwsPDWbdu3XnjlyxZQtu2bXF1daVTp06sXLmy1HbDMJg4cSJNmjShdu3aREVFsXPnzlIxWVlZ3HXXXXh4eFCvXj2GDh3KyZMnS8Vs2rSJXr164erqip+fH1OmTLmYy5OLYLMZ5JwqYkfGCb5LyeT9tft46asURn60gdvfXEP3F1dzxfiv6DXlewa+vZZHl2zklVW/s+i3/fy88yh7juZRUGzDYgFvDxe6Nq/HzZ2bMPyaVsy8swurH72Krc/cwOcjevLS7Z0ZHOlPWEADFe4iIpeInvWXx4n8IrYczOGLjenM/G4nD7yfSM+Xv2Pm97vIyivE19OVsTe2Ze2465h8WycV7iIiNVi5W94XL15MbGwss2fPJjw8nNdee43o6Gh27Njx/+3de0yT59sH8G8LtByUdsooVBDQeNg8zynDbT/zal/xsInbMpUY52lzMbrMqImYhTGzPzxmf+iMmjcqLkadS6YmumgQT1MrGnETDyPqy9h8pRjZaBEEanu9fyCVSgsUoX2o30/StDzP9TzcV+8239wtpYiNjW1Wf+HCBWRmZmLNmjV47733sHfvXkybNg2FhYUYPHgwAGD9+vXYtGkTdu/ejZSUFGRnZyM9PR03b95EeHg4AGDWrFkoKytDXl4e7HY75s2bh4ULF2Lv3r0AAJvNhgkTJsBkMmHbtm0oKirC/PnzodfrsXDhwhe5j14qtXYHrI/tqKyxP72uR+VjO2xNtz3dbnPdtsNWa3d9PU1LNKFq9NJHoJc+AkZ9OHrpI9HrlYbbCfpIxOnC+V3qREQBxqzvWLZaO/58WI0/K2pQ+rAaJRXVKK2owZ8Pq1Hx9J31541O7oF5byfjv183IDSEuUhERIBKpC1LrmdSU1MxatQofP/99wAAp9OJxMREfPHFF8jKympWP2PGDFRXV+PIkSOubW+99RaGDx+Obdu2QURgNBqxfPlyrFixAgBgtVphMBiQm5uLmTNn4tatW3j99ddx+fJlvPnmmwCAY8eOYfLkybh37x6MRiO2bt2Kr776ChaLBRqNBgCQlZWFQ4cO4Y8//mhTbzabDTqdDlarFdHR0b7cLZ1OROBwCuqeOFFrd6D2iRN1dgdq7U7UPnGgznXteFZjd6LuydMaj9sdsD1+8nRBXo/KGnuLf7beFvrIMBh1Eej1SoRrkd6wOG+4HdNNA5WKn88jImqrQGQTs9531ho7/qyobrg8rEFpxbNF+j9eFuiNYrppkNwzCkk9o5DcMxL/NTAWg3vpOmxsRESkbG3NJp/eea+vr8eVK1ewatUq1za1Wg2TyQSz2ezxGLPZjGXLlrltS09Px6FDhwAAJSUlsFgsMJlMrv06nQ6pqakwm82YOXMmzGYz9Hq9K8wBwGQyQa1Wo6CgAB988AHMZjP+85//uMK88fesW7cO//77L1555ZVmY6urq0Nd3bPv/rbZbM1q2mvJ3kKUWWvhcAqcInjiaLh2OAUOEThd13Db9sT5bF/jsQ3XHTa0VqlVgD5SA11EGHQRYdBHPr2OCIPu6XZ9k336yDBEP/1ZGxriv4ESEVGHC7as70z/c/Z/cbSoDKUV1fi3xt5i7avdtUjuGYmknlFIiYlCUs/Ipwv2SHTnR8CIiKgNfFq8P3z4EA6HAwaDwW27wWDw+oq3xWLxWG+xWFz7G7e1VPP8n+mFhoaiR48ebjUpKSnNztG4z1Ogr1mzBqtXr/be8Au4/n9W/FlR0ynnBhr+/FwbqkZ4WAjCw9TQhjZch4eGQNvsOsRzbVgIosPdF+i6yDB004TyP7UTEb2kgi3rO/OF+vvWx/jt70rXz7HdtUiOiWq2SE/qGcWvKiUiohf2UifJqlWr3N4psNlsSExM7JBzr84YjMf1DoSoVQhRAyFqNUJUKqjVQIhKhRC1Cmq16tntp9euS2Ot67YKoWoVtKENC3EuromIiFrXmS/UfzgiAaOTeyDp6TvoUVygExFRJ/IpZWJiYhASEoLy8nK37eXl5YiLi/N4TFxcXIv1jdfl5eWIj493qxk+fLir5sGDB27nePLkCf755x+383j6PU1/x/O0Wi20Wq3Xfl/E2P6vdsp5iYiIOlOwZX1nvlA/JEGHIQn8bDoREfmHT/++VKPRYOTIkcjPz3dtczqdyM/PR1pamsdj0tLS3OoBIC8vz1WfkpKCuLg4txqbzYaCggJXTVpaGiorK3HlyhVXzcmTJ+F0OpGamuqqOXv2LOx2u9vvGTBggN8/A0dERNRVBVvWa7VaREdHu12IiIi6JPHR/v37RavVSm5urty8eVMWLlwoer1eLBaLiIjMnj1bsrKyXPXnz5+X0NBQ2bhxo9y6dUtycnIkLCxMioqKXDVr164VvV4vhw8flmvXrklGRoakpKTI48ePXTUTJ06UESNGSEFBgZw7d0769esnmZmZrv2VlZViMBhk9uzZcv36ddm/f79ERkbK9u3b29yb1WoVAGK1Wn29W4iIiDpFILKJWU9EROQ/bc0mnxfvIiKbN2+W3r17i0ajkdGjR8vFixdd+8aOHStz5sxxqz9w4ID0799fNBqNDBo0SI4ePeq23+l0SnZ2thgMBtFqtTJ+/HgpLi52q6moqJDMzEzp1q2bREdHy7x586Sqqsqt5vfff5d33nlHtFqt9OrVS9auXetTXwx0IiJSmkBlE7OeiIjIP9qaTT5/z3swU/L3vBMR0cuJ2dSxeH8SEZHStDWbfPrMOxERERERERH5HxfvRERERERERArHxTsRERERERGRwnHxTkRERERERKRwXLwTERERERERKRwX70REREREREQKFxroAShJ47fm2Wy2AI+EiIioQWMm8ZtdOwaznoiIlKatWc/FexNVVVUAgMTExACPhIiIyF1VVRV0Ol2gh9HlMeuJiEipWst6lfClfBen04n79++je/fuUKlUL3Qum82GxMRE/P3334iOju6gEQZGsPQSLH0AwdNLsPQBsBclCpY+RA
RVVVUwGo1Qq/lptxfFrPeMvShPsPQBBE8vwdIHEDy9BEsfbc16vvPehFqtRkJCQoeeMzo6uks/kJoKll6CpQ8geHoJlj4A9qJEwdAH33HvOMz6lrEX5QmWPoDg6SVY+gCCp5dg6KMtWc+X8ImIiIiIiIgUjot3IiIiIiIiIoXj4r2TaLVa5OTkQKvVBnooLyxYegmWPoDg6SVY+gDYixIFSx+kXMH0GGMvyhMsfQDB00uw9AEETy/B0kdb8R/WERERERERESkc33knIiIiIiIiUjgu3omIiIiIiIgUjot3IiIiIiIiIoXj4p2IiIiIiIhI4bh4fwFbtmxBcnIywsPDkZqaikuXLrVY/9NPP2HgwIEIDw/HkCFD8Msvv/hppN6tWbMGo0aNQvfu3REbG4tp06ahuLi4xWNyc3OhUqncLuHh4X4asWfffPNNszENHDiwxWOUOB8AkJyc3KwXlUqFxYsXe6xX0nycPXsW77//PoxGI1QqFQ4dOuS2X0Tw9ddfIz4+HhERETCZTLh9+3ar5/X1ufaiWurDbrdj5cqVGDJkCKKiomA0GvHJJ5/g/v37LZ6zPY/RjtDanMydO7fZuCZOnNjqeZU0JwA8PmdUKhU2bNjg9ZyBmhPqWpj1gc+WpoIl75n1zSkpV5j1Dfw9JwDzvjVcvLfTjz/+iGXLliEnJweFhYUYNmwY0tPT8eDBA4/1Fy5cQGZmJhYsWICrV69i2rRpmDZtGq5fv+7nkbs7c+YMFi9ejIsXLyIvLw92ux0TJkxAdXV1i8dFR0ejrKzMdSktLfXTiL0bNGiQ25jOnTvntVap8wEAly9fdusjLy8PAPDxxx97PUYp81FdXY1hw4Zhy5YtHvevX78emzZtwrZt21BQUICoqCikp6ejtrbW6zl9fa51hJb6qKmpQWFhIbKzs1FYWIiff/4ZxcXFmDp1aqvn9eUx2lFamxMAmDhxotu49u3b1+I5lTYnANzGX1ZWhp07d0KlUuGjjz5q8byBmBPqOpj1ysiW5wVD3jPr3SktV5j1gZkTgHnfKqF2GT16tCxevNj1s8PhEKPRKGvWrPFYP336dJkyZYrbttTUVPn88887dZy+evDggQCQM2fOeK3ZtWuX6HQ6/w2qDXJycmTYsGFtru8q8yEi8uWXX0rfvn3F6XR63K/E+RARASAHDx50/ex0OiUuLk42bNjg2lZZWSlarVb27dvn9Ty+Ptc62vN9eHLp0iUBIKWlpV5rfH2MdgZPvcyZM0cyMjJ8Ok9XmJOMjAwZN25cizVKmBNSNma9zn+DaqNgzXtmvfJzhVnv3zkRYd57wnfe26G+vh5XrlyByWRybVOr1TCZTDCbzR6PMZvNbvUAkJ6e7rU+UKxWKwCgR48eLdY9evQISUlJSExMREZGBm7cuOGP4bXo9u3bMBqN6NOnD2bNmoW//vrLa21XmY/6+nrs2bMH8+fPh0ql8lqnxPl4XklJCSwWi9v9rtPpkJqa6vV+b89zLRCsVitUKhX0en2Ldb48Rv3p9OnTiI2NxYABA7Bo0SJUVFR4re0Kc1JeXo6jR49iwYIFrdYqdU4o8Jj1ys2WYMt7Zr3ycwVg1itxTl7GvOfivR0ePnwIh8MBg8Hgtt1gMMBisXg8xmKx+FQfCE6nE0uXLsXbb7+NwYMHe60bMGAAdu7cicOHD2PPnj1wOp0YM2YM7t2758fRuktNTUVubi6OHTuGrVu3oqSkBO+++y6qqqo81neF+QCAQ4cOobKyEnPnzvVao8T58KTxvvXlfm/Pc83famtrsXLlSmRmZiI6Otprna+PUX+ZOHEifvjhB+Tn52PdunU4c+YMJk2aBIfD4bG+K8zJ7t270b17d3z44Yct1il1TkgZmPXKzJZgzHtmvfJzhVnfQElzAryceR8a6AGQcixevBjXr19v9TMgaWlpSEtLc/08ZswYvPbaa9i+fTu+/fbbzh6mR5MmTXLdHjp0KFJTU5GUlIQDBw606dU4pdqxYwcmTZoEo9HotUaJ8/GysNvtmD59OkQEW7dubbFWqY/RmTNnum4PGTIEQ4cORd++fXH69GmMHz8+YON6ETt37sSsWbNa/WdOSp0Tos7UlbMeCM7nLbNe2Zj1yvUy5j3feW+HmJgYhISEoLy83G17eXk54uLiPB4TFxfnU72/LVmyBEeOHMGpU6eQkJDg07FhYWEYMWIE7ty500mj851er0f//v29jknp8wEApaWlOHHiBD799FOfjlPifABw3be+3O/tea75S2OYl5aWIi8vr8VX4j1p7TEaKH369EFMTIzXcSl5TgDg119/RXFxsc/PG0C5c0KBwax3p9Rs6ep5z6xXdq4w65U3J41e1rzn4r0dNBoNRo4cifz8fNc2p9OJ/Px8t1dFm0pLS3OrB4C8vDyv9f4iIliyZAkOHjyIkydPIiUlxedzOBwOFBUVIT4+vhNG2D6PHj3C3bt3vY5JqfPR1K5duxAbG4spU6b4dJwS5wMAUlJSEBcX53a/22w2FBQUeL3f2/Nc84fGML99+zZOnDiBnj17+nyO1h6jgXLv3j1UVFR4HZdS56TRjh07MHLkSAwbNsznY5U6JxQYzHp3Ss2Wrp73zHrl5gqzXnlz0tRLm/eB/X95Xdf+/ftFq9VKbm6u3Lx5UxYuXCh6vV4sFouIiMyePVuysrJc9efPn5fQ0FDZuHGj3Lp1S3JyciQsLEyKiooC1YKIiCxatEh0Op2cPn1aysrKXJeamhpXzfO9rF69Wo4fPy53796VK1euyMyZMyU8PFxu3LgRiBZERGT58uVy+vRpKSkpkfPnz4vJZJKYmBh58OCBiHSd+WjkcDikd+/esnLlymb7lDwfVVVVcvXqVbl69aoAkO+++06uXr3q+s+sa9euFb1eL4cPH5Zr165JRkaGpKSkyOPHj13nGDdunGzevNn1c2vPNX/3UV9fL1OnTpWEhAT57bff3J43dXV1Xvto7TEaiF6qqqpkxYoVYjabpaSkRE6cOCFvvPGG9OvXT2pra732orQ5aWS1WiUyMlK2bt3q8RxKmRPqOpj1ysiWpoIp75n1ys0VZn1g5qS1Xhq9zHnPxfsL2Lx5s/Tu3Vs0Go2MHj1aLl686No3duxYmTNnjlv9gQMHpH///qLRaGTQoEFy9OhRP4+4OQAeL7t27XLVPN/L0qVLXX0bDAaZPHmyFBYW+n/wTcyYMUPi4+NFo9FIr169ZMaMGXLnzh3X/q4yH42OHz8uAKS4uLjZPiXPx6lTpzw+nhrH63Q6JTs7WwwGg2i1Whk/fnyzHpOSkiQnJ8dtW0vPNX/3UVJS4vV5c+rUKa99tPYYDUQvNTU1MmHCBHn11VclLCxMkpKS5LPPPmsWzEqfk0bbt2+XiIgIqays9HgOpcwJdS3M+sBnS1PBlPfM+hy3bUrKFWZ9A3/PSWu9NHqZ814lItLed+2JiIiIiIiIqPPxM+9ERERERERECsfFOxEREREREZHCcfFOREREREREpHBcvBMREREREREpHBfvRERERERERArHxTsRERERERGRwnHxTkRER
ERERKRwXLwTERERERERKRwX70REREREREQKx8U7ERERERERkcJx8U5ERERERESkcFy8ExERERERESnc/wMLkd6k+nFB1gAAAABJRU5ErkJggg==", "text/plain": [ "
" ] @@ -413,7 +434,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 37, "metadata": {}, "outputs": [ { @@ -432,7 +453,7 @@ "" ] }, - "execution_count": 19, + "execution_count": 37, "metadata": {}, "output_type": "execute_result" } diff --git a/poetry.lock b/poetry.lock index fdc1a02..73b3ded 100644 --- a/poetry.lock +++ b/poetry.lock @@ -22,6 +22,16 @@ files = [ {file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"}, ] +[[package]] +name = "antlr4-python3-runtime" +version = "4.9.3" +description = "ANTLR 4.9.3 runtime for Python 3.7" +optional = false +python-versions = "*" +files = [ + {file = "antlr4-python3-runtime-4.9.3.tar.gz", hash = "sha256:f224469b4168294902bb1efa80a8bf7855f24c99aef99cbefc1bcd3cce77881b"}, +] + [[package]] name = "appdirs" version = "1.4.4" @@ -1745,8 +1755,8 @@ files = [ [package.dependencies] numpy = [ {version = ">=1.23.3", markers = "python_version > \"3.10\""}, - {version = ">=1.21.2", markers = "python_version > \"3.9\" and python_version <= \"3.10\""}, {version = ">1.20", markers = "python_version <= \"3.9\""}, + {version = ">=1.21.2", markers = "python_version > \"3.9\" and python_version <= \"3.10\""}, ] [package.extras] @@ -1922,6 +1932,21 @@ files = [ {file = "numpy-1.26.2.tar.gz", hash = "sha256:f65738447676ab5777f11e6bbbdb8ce11b785e105f690bc45966574816b6d3ea"}, ] +[[package]] +name = "omegaconf" +version = "2.3.0" +description = "A flexible configuration library" +optional = false +python-versions = ">=3.6" +files = [ + {file = "omegaconf-2.3.0-py3-none-any.whl", hash = "sha256:7b4df175cdb08ba400f45cae3bdcae7ba8365db4d165fc65fd04b050ab63b46b"}, + {file = "omegaconf-2.3.0.tar.gz", hash = "sha256:d5d4b6d29955cc50ad50c46dc269bcd92c6e00f5f90d23ab5fee7bfca4ba4cc7"}, +] + +[package.dependencies] +antlr4-python3-runtime = "==4.9.*" +PyYAML = ">=5.1.0" + [[package]] name = "opt-einsum" version = "3.3.0" @@ -3139,6 +3164,17 @@ files = [ ml-dtypes = ">=0.3.1" numpy = ">=1.16.0" +[[package]] +name = "toml" +version = "0.10.2" +description = "Python Library for Tom's Obvious, Minimal Language" +optional = false +python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*" +files = [ + {file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"}, + {file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"}, +] + [[package]] name = "tomli" version = "2.0.1" @@ -3413,4 +3449,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p [metadata] lock-version = "2.0" python-versions = ">=3.9,<=3.11" -content-hash = "5fc2e88ec569a667ab5076bf43acf88c3bf3d7d359756359b31a9ccdd25148d7" +content-hash = "4432397d6d9799bde5f98e13cc9ae6f1db83ff2e7784cb4ad9d2682510051636" diff --git a/pyproject.toml b/pyproject.toml index 8dabb99..532b703 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -54,6 +54,7 @@ ott-jax = "^0.4.2" matscipy = "^0.8.0" torch = {version = "2.1.0+cpu", source = "torchcpu"} wget = "^3.2" +omegaconf = "^2.3.0" [tool.poetry.group.dev.dependencies] # mypy = ">=1.8.0" - consider in the future @@ -66,6 +67,7 @@ ipykernel = ">=6.25.1" [tool.poetry.group.docs.dependencies] sphinx = "^7.2.6" sphinx-rtd-theme = "^1.3.0" +toml = "^0.10.2" [[tool.poetry.source]] name = "torchcpu" @@ -73,7 +75,6 @@ url = "https://download.pytorch.org/whl/cpu" priority = "explicit" [tool.ruff] -ignore = ["F811", "E402"] exclude = [ ".git", ".venv", @@ -85,6 +86,7 
@@ show-fixes = true line-length = 88 [tool.ruff.lint] +ignore = ["F811", "E402"] select = [ "E", # pycodestyle "F", # Pyflakes @@ -93,6 +95,9 @@ select = [ # "D", # pydocstyle - consider in the future ] +[tool.ruff.lint.isort] +known-third-party = ["wandb"] + [tool.pytest.ini_options] testpaths = "tests/" addopts = "--cov=lagrangebench --cov-fail-under=50" @@ -101,6 +106,7 @@ filterwarnings = [ "ignore::DeprecationWarning:^(?!.*lagrangebench).*" ] +[tool.poetry_bumpversion.file."lagrangebench/__init__.py"] [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" diff --git a/requirements_cuda.txt b/requirements_cuda.txt index 0bc59df..535f6eb 100644 --- a/requirements_cuda.txt +++ b/requirements_cuda.txt @@ -11,6 +11,7 @@ jax_md>=0.2.8 jmp>=0.0.4 jraph>=0.0.6.dev0 matscipy>=0.8.0 +omegaconf>=2.3.0 optax>=0.1.7 ott-jax>=0.4.2 pyvista @@ -18,3 +19,4 @@ PyYAML torch==2.1.0+cpu wandb wget +yacs>=0.1.8 diff --git a/tests/case_test.py b/tests/case_test.py index 373eb28..f7b77db 100644 --- a/tests/case_test.py +++ b/tests/case_test.py @@ -29,7 +29,8 @@ def setUp(self): box, self.metadata, input_seq_length=3, # two past velocities - isotropic_norm=False, + cfg_neighbors={"backend": "jaxmd_vmap", "multiplier": 1.25}, + cfg_model={"isotropic_norm": False, "magnitude_features": False}, noise_std=0.0, external_force_fn=None, ) @@ -63,7 +64,7 @@ def setUp(self): ) self.particle_types = np.array([0, 0, 0]) - key, features, target_dict, neighbors = self.case.allocate( + _, _, _, neighbors = self.case.allocate( self.key, (self.position_data, self.particle_types) ) self.neighbors = neighbors diff --git a/tests/pushforward_test.py b/tests/pushforward_test.py index 06d77d8..83ef2c1 100644 --- a/tests/pushforward_test.py +++ b/tests/pushforward_test.py @@ -2,8 +2,8 @@ import jax import numpy as np +from omegaconf import OmegaConf -from lagrangebench import PushforwardConfig from lagrangebench.train.strats import push_forward_sample_steps @@ -11,10 +11,12 @@ class TestPushForward(unittest.TestCase): """Class for unit testing the push-forward functions.""" def setUp(self): - self.pf = PushforwardConfig( - steps=[-1, 20000, 50000, 100000], - unrolls=[0, 1, 3, 20], - probs=[4.05, 4.05, 1.0, 1.0], + self.pf = OmegaConf.create( + { + "steps": [-1, 20000, 50000, 100000], + "unrolls": [0, 1, 3, 20], + "probs": [4.05, 4.05, 1.0, 1.0], + } ) self.key = jax.random.PRNGKey(42) diff --git a/tests/rollout_test.py b/tests/rollout_test.py index a559c48..f1f32f4 100644 --- a/tests/rollout_test.py +++ b/tests/rollout_test.py @@ -1,5 +1,4 @@ import unittest -from argparse import Namespace from functools import partial import haiku as hk @@ -9,6 +8,7 @@ from jax import config as jax_config from jax import jit, vmap from jax_md import space +from omegaconf import OmegaConf from torch.utils.data import DataLoader jax_config.update("jax_enable_x64", True) @@ -17,7 +17,7 @@ from lagrangebench.data import H5Dataset from lagrangebench.data.utils import get_dataset_stats, numpy_collate from lagrangebench.evaluate import MetricsComputer -from lagrangebench.evaluate.rollout import _forward_eval, eval_batched_rollout +from lagrangebench.evaluate.rollout import _eval_batched_rollout, _forward_eval from lagrangebench.utils import broadcast_from_batch @@ -25,21 +25,27 @@ class TestInferBuilder(unittest.TestCase): """Class for unit testing the evaluate_single_rollout function.""" def setUp(self): - self.config = Namespace( - data_dir="tests/3D_LJ_3_1214every1", # Lennard-Jones dataset - input_seq_length=3, # two 
past velocities - metrics=["mse"], - n_rollout_steps=100, - isotropic_norm=False, - noise_std=0.0, + self.cfg = OmegaConf.create( + { + "dataset_path": "tests/3D_LJ_3_1214every1", # Lennard-Jones dataset + "model": { + "input_seq_length": 3, # two past velocities + "isotropic_norm": False, + }, + "eval": { + "train": {"metrics": ["mse"]}, + "n_rollout_steps": 100, + }, + "train": {"noise_std": 0.0}, + } ) data_valid = H5Dataset( split="valid", - dataset_path=self.config.data_dir, + dataset_path=self.cfg.dataset_path, name="lj3d", - input_seq_length=self.config.input_seq_length, - extra_seq_length=self.config.n_rollout_steps, + input_seq_length=self.cfg.model.input_seq_length, + extra_seq_length=self.cfg.eval.n_rollout_steps, ) self.loader_valid = DataLoader( dataset=data_valid, batch_size=1, collate_fn=numpy_collate @@ -47,7 +53,7 @@ def setUp(self): self.metadata = data_valid.metadata self.normalization_stats = get_dataset_stats( - self.metadata, self.config.isotropic_norm, self.config.noise_std + self.metadata, self.cfg.model.isotropic_norm, self.cfg.train.noise_std ) bounds = np.array(self.metadata["bounds"]) @@ -57,8 +63,8 @@ def setUp(self): self.case = case_builder( box, self.metadata, - self.config.input_seq_length, - noise_std=self.config.noise_std, + self.cfg.model.input_seq_length, + noise_std=self.cfg.train.noise_std, ) self.key = jax.random.PRNGKey(0) @@ -139,7 +145,7 @@ def model(x): for n_extrap_steps in [0, 5, 10]: with self.subTest(n_extrap_steps): - example_rollout_batch, metrics_batch, neighbors = eval_batched_rollout( + example_rollout_batch, metrics_batch, neighbors = _eval_batched_rollout( forward_eval_vmap=forward_eval_vmap, preprocess_eval_vmap=preprocess_eval_vmap, case=self.case, @@ -148,7 +154,7 @@ def model(x): traj_batch_i=traj_batch_i, neighbors=neighbors, metrics_computer_vmap=metrics_computer_vmap, - n_rollout_steps=self.config.n_rollout_steps, + n_rollout_steps=self.cfg.eval.n_rollout_steps, n_extrap_steps=n_extrap_steps, t_window=isl, ) @@ -183,7 +189,7 @@ def model(x): "Wrong rollout prediction", ) - total_steps = self.config.n_rollout_steps + n_extrap_steps + total_steps = self.cfg.eval.n_rollout_steps + n_extrap_steps assert example_rollout_batch.shape[1] == total_steps diff --git a/tests/runner_test.py b/tests/runner_test.py new file mode 100644 index 0000000..4dae722 --- /dev/null +++ b/tests/runner_test.py @@ -0,0 +1,59 @@ +"""Runner test with a linear model and LJ dataset.""" + +import unittest + +from omegaconf import OmegaConf + +from lagrangebench.defaults import defaults +from lagrangebench.runner import train_or_infer + + +class TestRunner(unittest.TestCase): + """Test whether train_or_infer runs through.""" + + def setUp(self): + self.cfg = OmegaConf.create( + { + "mode": "all", + "dataset_path": "tests/3D_LJ_3_1214every1", + "model": { + "name": "linear", + "input_seq_length": 3, + }, + "train": { + "step_max": 10, + "noise_std": 0.0, + }, + "eval": { + "n_rollout_steps": 5, + "train": { + "n_trajs": 2, + "metrics_stride": 5, + "metrics": ["mse"], + "out_type": "none", + }, + "infer": { + "n_trajs": 2, + "metrics_stride": 1, + "metrics": ["mse"], + "out_type": "none", + }, + }, + "logging": { + "log_steps": 1, + "eval_steps": 5, + "wandb": False, + "ckp_dir": "/tmp/ckp", + }, + } + ) + # overwrite defaults with user-defined config + self.cfg = OmegaConf.merge(defaults, self.cfg) + + def test_runner(self): + out = train_or_infer(self.cfg) + self.assertEqual(out, 0) + + +if __name__ == "__main__": + unittest.main()
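The new `tests/runner_test.py` illustrates the configuration pattern introduced by this diff: build a nested config with `OmegaConf.create` and merge it over the package defaults ("overwrite defaults with user-defined config"). The sketch below shows that merge mechanic in isolation. It is a minimal illustration, not the library's actual defaults: the `defaults` dictionary and the dotlist override are hypothetical stand-ins for `lagrangebench.defaults.defaults` and the CLI arguments, respectively.

```python
# Minimal sketch of the OmegaConf merge pattern used in tests/runner_test.py.
# `defaults` here is a hypothetical stand-in for lagrangebench.defaults.defaults;
# only the merge/override mechanics are demonstrated.
from omegaconf import OmegaConf

defaults = OmegaConf.create(
    {
        "mode": "train",
        "dataset_path": None,
        "model": {"name": "gns", "input_seq_length": 6},
        "train": {"step_max": 500_000, "noise_std": 3e-4},
    }
)

# User-defined config, e.g. loaded from a YAML file or built inline as in the test.
user_cfg = OmegaConf.create(
    {
        "mode": "all",
        "dataset_path": "tests/3D_LJ_3_1214every1",
        "model": {"name": "linear", "input_seq_length": 3},
        "train": {"step_max": 10, "noise_std": 0.0},
    }
)

# Later arguments win: the user config overrides the defaults,
# and dotlist-style overrides (e.g. from the command line) override both.
cli_overrides = OmegaConf.from_dotlist(["train.step_max=20"])
cfg = OmegaConf.merge(defaults, user_cfg, cli_overrides)

assert cfg.model.name == "linear"
assert cfg.train.step_max == 20
print(OmegaConf.to_yaml(cfg))
```

Nested attribute access (`cfg.model.input_seq_length`, `cfg.train.noise_std`) is what the rewritten `case_test.py` and `rollout_test.py` rely on in place of the old flat `argparse.Namespace` fields.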